In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this IPython notebook.
In this notebook, you will take the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that the person most resembles. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
In the code cell below, we import a dataset of dog images. We populate a few variables through the use of the load_files function from the scikit-learn library:
train_files, valid_files, test_files - numpy arrays containing file paths to images
train_targets, valid_targets, test_targets - numpy arrays containing onehot-encoded classification labels
dog_names - list of string-valued dog breed names for translating labels
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob
# define function to load train, test, and validation datasets
def load_dataset(path):
    data = load_files(path)
    dog_files = np.array(data['filenames'])
    dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
    return dog_files, dog_targets
# load train, test, and validation datasets
train_files, train_targets = load_dataset('dogImages/train')
valid_files, valid_targets = load_dataset('dogImages/valid')
test_files, test_targets = load_dataset('dogImages/test')
# load list of dog names
dog_names = [item[20:-1] for item in sorted(glob("dogImages/train/*/"))]
# print statistics about the dataset
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.'% len(test_files))
Using TensorFlow backend.
There are 133 total dog categories. There are 8351 total dog images. There are 6680 training dog images. There are 835 validation dog images. There are 836 test dog images.
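The one-hot encoding performed by np_utils.to_categorical above can be sketched with plain NumPy. Note that to_categorical_sketch is a hypothetical helper for illustration only, not part of the project code:

```python
import numpy as np

def to_categorical_sketch(targets, num_classes):
    # build an (n, num_classes) matrix with exactly one 1.0 per row
    onehot = np.zeros((len(targets), num_classes), dtype='float32')
    onehot[np.arange(len(targets)), targets] = 1.0
    return onehot

# e.g. three integer labels drawn from 4 classes
out = to_categorical_sketch([0, 2, 1], 4)
print(out.shape)  # (3, 4)
```

Each row is all zeros except for a single 1.0 at the index of that sample's class, which is exactly the format expected by a softmax output layer with categorical cross-entropy loss.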
In the code cell below, we import a dataset of human images, where the file paths are stored in the numpy array human_files.
import random
random.seed(8675309)
# load filenames in shuffled human dataset
human_files = np.array(glob("lfw/*/*"))
human_profiles = np.array(glob("human_profiles/*"))
dog_profiles = np.array(glob("dog_profiles/*"))
random.shuffle(human_files)
# print statistics about the dataset
print('There are %d total human images.' % len(human_files))
There are 13233 total human images.
We use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory.
In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[3])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x, y, w, h) in faces:
    # add bounding box to color image
    cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Number of faces detected: 3
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.
In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    # normalize pixel intensities in place before grayscale conversion
    cv2.normalize(img, img, 0, 255, cv2.NORM_MINMAX)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0
Question 1: Use the code cell below to test the performance of the face_detector function.
What percentage of the first 100 images in human_files have a detected human face? What percentage of the first 100 images in dog_files have a detected human face? Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.
Answer: At first I was not sure I had read the question correctly, because 98 percent seemed surprisingly high. The detector correctly identifies a face in 98 percent of the human images, and produces a false positive (detecting a human face in a dog image) 10 percent of the time.
human_files_short = human_files[:100]
dog_files_short = train_files[:100]
# Do NOT modify the code above this line.
human_guess = 0
dog_guess = 0
for i in range(len(human_files_short)):
    if face_detector(human_files_short[i]):
        human_guess += 1
for i in range(len(dog_files_short)):
    if face_detector(dog_files_short[i]):
        dog_guess += 1
print("Human Detected: " + str(human_guess))
print("Dog Detected: " + str(dog_guess))
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
Human Detected: 98 Dog Detected: 10
Question 2: This algorithmic choice necessitates that we communicate to the user that we accept human images only when they provide a clear view of a face (otherwise, we risk having unnecessarily frustrated users!). In your opinion, is this a reasonable expectation to pose on the user? If not, can you think of a way to detect humans in images that does not necessitate an image with a clearly presented face?
Answer: I believe a 99 percent face-detection rate is already quite good; however, an 11 percent false-positive rate may not be acceptable. I think normalizing the image around its mean or standard deviation could give better results when the face is not clearly presented. In fact, I tried normalizing the image before feeding it into the detector: this reduced the false positives to 10 percent, but it also reduced the human-detection accuracy to 98 percent.
We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on each of the datasets.
## (Optional) TODO: Report the performance of another
## face detection algorithm on the LFW dataset
### Feel free to use as many code cells as needed.
# Try to use https://github.com/davidsandberg/facenet
In this section, we use a pre-trained ResNet-50 model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories. Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.
from keras.applications.resnet50 import ResNet50
# define ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')
When using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape
$$(\text{nb\_samples}, \text{rows}, \text{columns}, \text{channels}),$$ where nb_samples corresponds to the total number of images (or samples), and rows, columns, and channels correspond to the number of rows, columns, and channels for each image, respectively.
The path_to_tensor function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \times 224$ pixels. Next, the image is converted to an array, which is then expanded to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape $$(1, 224, 224, 3).$$
The paths_to_tensor function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape $$(\text{nb\_samples}, 224, 224, 3).$$
Here, nb_samples is the number of samples, or number of images, in the supplied array of image paths. It is best to think of nb_samples as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!
from keras.preprocessing import image
from tqdm import tqdm
def path_to_tensor(img_path):
    # loads RGB image as PIL.Image.Image type
    img = image.load_img(img_path, target_size=(224, 224))
    # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
    return np.expand_dims(x, axis=0)

def paths_to_tensor(img_paths):
    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)
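The shape bookkeeping performed by path_to_tensor and paths_to_tensor can be checked with plain NumPy alone, no Keras required; the zero arrays below stand in for real images:

```python
import numpy as np

# three stand-in "images", each a 3D tensor of shape (224, 224, 3)
images = [np.zeros((224, 224, 3), dtype='float32') for _ in range(3)]

# expand each to a 4D tensor of shape (1, 224, 224, 3), as path_to_tensor does
tensors = [np.expand_dims(x, axis=0) for x in images]

# stack along the first axis, as paths_to_tensor does
batch = np.vstack(tensors)
print(batch.shape)  # (3, 224, 224, 3)
```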
Getting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in BGR as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. This is implemented in the imported function preprocess_input. If you're curious, you can check the code for preprocess_input here.
Now that we have a way to format our image for supplying to ResNet-50, we are now ready to use the model to extract the predictions. This is accomplished with the predict method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the ResNet50_predict_labels function below.
By taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this dictionary.
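A rough NumPy sketch of the preprocessing described above (channel reversal followed by mean subtraction); preprocess_sketch is a hypothetical illustration, and the real Keras preprocess_input may differ in details:

```python
import numpy as np

# ImageNet mean pixel in BGR channel order (values from the text above)
IMAGENET_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype='float32')

def preprocess_sketch(x):
    # x: 4D tensor (nb_samples, rows, cols, 3) with channels in RGB order
    x = x[..., ::-1].astype('float32')   # reorder channels RGB -> BGR
    return x - IMAGENET_MEAN_BGR         # subtract the ImageNet mean pixel

# a black image maps to the negated mean at every pixel
out = preprocess_sketch(np.zeros((1, 224, 224, 3)))
print(out[0, 0, 0])
```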
from keras.applications.resnet50 import preprocess_input, decode_predictions
def ResNet50_predict_labels(img_path):
    # returns the index of the highest predicted probability for the image located at img_path
    img = preprocess_input(path_to_tensor(img_path))
    return np.argmax(ResNet50_model.predict(img))
While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the ResNet50_predict_labels function above returns a value between 151 and 268 (inclusive).
We use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    prediction = ResNet50_predict_labels(img_path)
    return (prediction <= 268) and (prediction >= 151)
Question 3: Use the code cell below to test the performance of your dog_detector function.
What percentage of the images in human_files_short have a detected dog? What percentage of the images in dog_files_short have a detected dog?
Answer: ResNet-50 performs remarkably well, detecting a dog in 100 percent of the dog images with no false positives on the human images. :)
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
human_guess = 0
dog_guess = 0
for i in range(len(human_files_short)):
    if dog_detector(human_files_short[i]):
        human_guess += 1
for i in range(len(dog_files_short)):
    if dog_detector(dog_files_short[i]):
        dog_guess += 1
print("Human Detected: " + str(human_guess))
print("Dog Detected: " + str(dog_guess))
Human Detected: 0 Dog Detected: 100
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
Be careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel.
| Brittany | Welsh Springer Spaniel |
|---|---|
![]() |
![]() |
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
| Curly-Coated Retriever | American Water Spaniel |
|---|---|
![]() |
![]() |
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
| Yellow Labrador | Chocolate Labrador | Black Labrador |
|---|---|---|
![]() |
![]() |
![]() |
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
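The random-chance baseline stated above can be checked with one line of arithmetic:

```python
# probability of a correct random guess over 133 (roughly balanced) classes,
# expressed as a percentage
chance_pct = round(100 / 133, 2)
print(chance_pct)  # 0.75
```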
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
We rescale the images by dividing every pixel in every image by 255.
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
100%|██████████| 6680/6680 [00:58<00:00, 115.01it/s] 100%|██████████| 835/835 [00:06<00:00, 128.07it/s] 100%|██████████| 836/836 [00:06<00:00, 128.79it/s]
Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:
model.summary()
We have imported some Python modules to get you started, but feel free to import as many modules as you need. If you end up getting stuck, here's a hint that specifies a model that trains relatively fast on CPU and attains >1% test accuracy in 5 epochs:

Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. If you chose to use the hinted architecture above, describe why you think that CNN architecture should work well for the image classification task.
Answer: I chose to build my CNN with three convolutional layers and no dropout, because my initial goal was simply to pass 1 percent test accuracy, where overfitting is not a concern. The last three layers consist of a global average pooling layer followed by two fully connected layers. I use the ReLU activation throughout, except at the output layer, where I use softmax because this is a classification problem. As for kernel_size=3: this is not my own argument, but as I read online, odd-sized kernels are symmetric about a center pixel, so they tend to balance the left and right context (purely based on the structure and how the convolution operation works).
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=3, padding='same', activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=16,kernel_size=3,padding='same',activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32,kernel_size=3,padding='same',activation='relu'))
model.add(GlobalAveragePooling2D())
model.add(Dense(500,activation='relu'))
model.add(Dense(133,activation='softmax'))
### TODO: Define your architecture.
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 224, 224, 16)      448
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 112, 112, 16)      0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 112, 112, 16)      2320
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 56, 56, 16)        0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 56, 56, 32)        4640
_________________________________________________________________
global_average_pooling2d_1 ( (None, 32)                0
_________________________________________________________________
dense_1 (Dense)              (None, 500)               16500
_________________________________________________________________
dense_2 (Dense)              (None, 133)               66633
=================================================================
Total params: 90,541
Trainable params: 90,541
Non-trainable params: 0
_________________________________________________________________
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.
You are welcome to augment the training data, but this is not a requirement.
from keras.callbacks import ModelCheckpoint
### TODO: specify the number of epochs that you would like to use to train the model.
epochs = 10
### Do NOT modify the code below this line.
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
model.fit(train_tensors, train_targets,
validation_data=(valid_tensors, valid_targets),
epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)
Train on 6680 samples, validate on 835 samples Epoch 1/10 6660/6680 [============================>.] - ETA: 0s - loss: 4.8814 - acc: 0.0086Epoch 00000: val_loss improved from inf to 4.86267, saving model to saved_models/weights.best.from_scratch.hdf5 6680/6680 [==============================] - 34s - loss: 4.8817 - acc: 0.0085 - val_loss: 4.8627 - val_acc: 0.0120 Epoch 2/10 6660/6680 [============================>.] - ETA: 0s - loss: 4.8332 - acc: 0.0137Epoch 00001: val_loss improved from 4.86267 to 4.82297, saving model to saved_models/weights.best.from_scratch.hdf5 6680/6680 [==============================] - 33s - loss: 4.8332 - acc: 0.0136 - val_loss: 4.8230 - val_acc: 0.0156 Epoch 3/10 6660/6680 [============================>.] - ETA: 0s - loss: 4.7753 - acc: 0.0176Epoch 00002: val_loss improved from 4.82297 to 4.76941, saving model to saved_models/weights.best.from_scratch.hdf5 6680/6680 [==============================] - 33s - loss: 4.7756 - acc: 0.0175 - val_loss: 4.7694 - val_acc: 0.0204 Epoch 4/10 6660/6680 [============================>.] - ETA: 0s - loss: 4.7313 - acc: 0.0194Epoch 00003: val_loss improved from 4.76941 to 4.71106, saving model to saved_models/weights.best.from_scratch.hdf5 6680/6680 [==============================] - 33s - loss: 4.7314 - acc: 0.0193 - val_loss: 4.7111 - val_acc: 0.0228 Epoch 5/10 6660/6680 [============================>.] - ETA: 0s - loss: 4.6868 - acc: 0.0207Epoch 00004: val_loss improved from 4.71106 to 4.67190, saving model to saved_models/weights.best.from_scratch.hdf5 6680/6680 [==============================] - 33s - loss: 4.6869 - acc: 0.0210 - val_loss: 4.6719 - val_acc: 0.0323 Epoch 6/10 6660/6680 [============================>.] 
- ETA: 0s - loss: 4.6390 - acc: 0.0239Epoch 00005: val_loss improved from 4.67190 to 4.65457, saving model to saved_models/weights.best.from_scratch.hdf5 6680/6680 [==============================] - 33s - loss: 4.6385 - acc: 0.0241 - val_loss: 4.6546 - val_acc: 0.0275 Epoch 7/10 6660/6680 [============================>.] - ETA: 0s - loss: 4.5930 - acc: 0.0318Epoch 00006: val_loss improved from 4.65457 to 4.60826, saving model to saved_models/weights.best.from_scratch.hdf5 6680/6680 [==============================] - 33s - loss: 4.5928 - acc: 0.0319 - val_loss: 4.6083 - val_acc: 0.0371 Epoch 8/10 6660/6680 [============================>.] - ETA: 0s - loss: 4.5587 - acc: 0.0329Epoch 00007: val_loss improved from 4.60826 to 4.57111, saving model to saved_models/weights.best.from_scratch.hdf5 6680/6680 [==============================] - 33s - loss: 4.5588 - acc: 0.0329 - val_loss: 4.5711 - val_acc: 0.0431 Epoch 9/10 6660/6680 [============================>.] - ETA: 0s - loss: 4.5260 - acc: 0.0371Epoch 00008: val_loss improved from 4.57111 to 4.54604, saving model to saved_models/weights.best.from_scratch.hdf5 6680/6680 [==============================] - 33s - loss: 4.5258 - acc: 0.0370 - val_loss: 4.5460 - val_acc: 0.0407 Epoch 10/10 6660/6680 [============================>.] - ETA: 0s - loss: 4.4775 - acc: 0.0416Epoch 00009: val_loss did not improve 6680/6680 [==============================] - 33s - loss: 4.4764 - acc: 0.0415 - val_loss: 4.5780 - val_acc: 0.0323
<keras.callbacks.History at 0x7f13b7a17a20>
model.load_weights('saved_models/weights.best.from_scratch.hdf5')
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.
# get index of predicted dog breed for each image in test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]
# report test accuracy
test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
# compare predictions to test_targets along axis 1, which indexes the dog breed
print('Test accuracy: %.4f%%' % test_accuracy)
Test accuracy: 4.4258%
bottleneck_features = np.load('bottleneck_features/DogVGG16Data.npz')
train_VGG16 = bottleneck_features['train']
valid_VGG16 = bottleneck_features['valid']
test_VGG16 = bottleneck_features['test']
The model uses the pre-trained VGG-16 model as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.
VGG16_model = Sequential()
VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
#print(train_VGG16.shape[1:])
VGG16_model.add(Dense(133, activation='softmax'))
VGG16_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
global_average_pooling2d_2 ( (None, 512)               0
_________________________________________________________________
dense_3 (Dense)              (None, 133)               68229
=================================================================
Total params: 68,229
Trainable params: 68,229
Non-trainable params: 0
_________________________________________________________________
VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5',
verbose=1, save_best_only=True)
VGG16_model.fit(train_VGG16, train_targets,
validation_data=(valid_VGG16, valid_targets),
epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)
Train on 6680 samples, validate on 835 samples Epoch 1/20 6440/6680 [===========================>..] - ETA: 0s - loss: 12.9789 - acc: 0.0984Epoch 00000: val_loss improved from inf to 11.64502, saving model to saved_models/weights.best.VGG16.hdf5 6680/6680 [==============================] - 1s - loss: 12.9212 - acc: 0.1016 - val_loss: 11.6450 - val_acc: 0.1737 Epoch 2/20 6440/6680 [===========================>..] - ETA: 0s - loss: 11.1642 - acc: 0.2424Epoch 00001: val_loss improved from 11.64502 to 10.99529, saving model to saved_models/weights.best.VGG16.hdf5 6680/6680 [==============================] - 1s - loss: 11.1813 - acc: 0.2422 - val_loss: 10.9953 - val_acc: 0.2503 Epoch 3/20 6440/6680 [===========================>..] - ETA: 0s - loss: 10.6056 - acc: 0.2922Epoch 00002: val_loss improved from 10.99529 to 10.51503, saving model to saved_models/weights.best.VGG16.hdf5 6680/6680 [==============================] - 1s - loss: 10.6225 - acc: 0.2906 - val_loss: 10.5150 - val_acc: 0.2766 Epoch 4/20 6460/6680 [============================>.] - ETA: 0s - loss: 10.3416 - acc: 0.3224Epoch 00003: val_loss improved from 10.51503 to 10.49811, saving model to saved_models/weights.best.VGG16.hdf5 6680/6680 [==============================] - 1s - loss: 10.3129 - acc: 0.3240 - val_loss: 10.4981 - val_acc: 0.2766 Epoch 5/20 6660/6680 [============================>.] - ETA: 0s - loss: 10.0797 - acc: 0.3372Epoch 00004: val_loss improved from 10.49811 to 10.18442, saving model to saved_models/weights.best.VGG16.hdf5 6680/6680 [==============================] - 1s - loss: 10.0809 - acc: 0.3373 - val_loss: 10.1844 - val_acc: 0.3042 Epoch 6/20 6660/6680 [============================>.] 
[VGG16 training log, condensed] Epochs 5-20 of 20: val_loss improved from 10.18442 to a best of 8.79023, each improvement saving the model to saved_models/weights.best.VGG16.hdf5; final epoch 20/20: loss: 8.0696 - acc: 0.4934 - val_loss: 8.8433 - val_acc: 0.3952
<keras.callbacks.History at 0x7f13b77ecb38>
VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')
Now, we can use the CNN to test how well it identifies breeds within our test dataset of dog images. We print the test accuracy below.
# get index of predicted dog breed for each image in test set
VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]
# report test accuracy
test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
Test accuracy: 40.1914%
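The loop above calls `predict` once per image; the same accuracy can be computed from a single batched prediction matrix. A minimal NumPy sketch of that computation, where `probs` and `targets` are small synthetic stand-ins (in the notebook they would come from `VGG16_model.predict(test_VGG16)` and `test_targets`):

```python
import numpy as np

# synthetic predicted probabilities, one row per image (stand-in data)
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1],
                  [0.3, 0.3, 0.4]])
# matching one-hot targets (stand-in data)
targets = np.array([[0, 1, 0],
                    [1, 0, 0],
                    [0, 0, 1]])

# class index with the highest predicted probability for each image
predictions = np.argmax(probs, axis=1)
# accuracy: fraction of predictions matching the one-hot target's argmax
accuracy = 100 * np.mean(predictions == np.argmax(targets, axis=1))
print(accuracy)  # all three rows match, so 100.0
```

This is equivalent to the per-image loop but runs the forward pass once over the whole batch.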
from extract_bottleneck_features import *
def VGG16_predict_breed(img_path):
    # extract the VGG-16 bottleneck features for the supplied image
    bottleneck_feature = extract_VGG16(path_to_tensor(img_path))
    # obtain the predicted probability vector over breeds
    predicted_vector = VGG16_model.predict(bottleneck_feature)
    # return the dog breed that is predicted by the model
    return dog_names[np.argmax(predicted_vector)]
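The final step of the function is just an argmax-to-name lookup into `dog_names`. A tiny sketch with a hypothetical three-breed list and a hand-written output vector (both are illustrative stand-ins, not notebook data):

```python
import numpy as np

# hypothetical breed list and model output of shape (1, n_breeds)
dog_names = ['Affenpinscher', 'Afghan_hound', 'Beagle']
predicted_vector = np.array([[0.05, 0.15, 0.80]])

# np.argmax flattens the (1, n) vector, giving the index of the top breed
print(dog_names[np.argmax(predicted_vector)])  # Beagle
```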
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
In Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for all of the networks that are currently available in Keras:
The files are encoded as such:
Dog{network}Data.npz
where {network}, in the above filename, can be one of VGG19, Resnet50, InceptionV3, or Xception. Pick one of the above architectures, download the corresponding bottleneck features, and store the downloaded file in the bottleneck_features/ folder in the repository.
In the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following:
bottleneck_features = np.load('bottleneck_features/Dog{network}Data.npz')
train_{network} = bottleneck_features['train']
valid_{network} = bottleneck_features['valid']
test_{network} = bottleneck_features['test']
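For reference, the `.npz` archives are ordinary NumPy zip files holding one array per split under the keys `train`, `valid`, and `test`. A minimal sketch with synthetic arrays (the file name and shapes here are illustrative, not the real downloads):

```python
import os
import tempfile
import numpy as np

# write a toy archive with the same three keys the bottleneck files use
path = os.path.join(tempfile.mkdtemp(), 'DogDemoData.npz')
np.savez(path,
         train=np.zeros((3, 7, 7, 2048), dtype=np.float32),
         valid=np.zeros((2, 7, 7, 2048), dtype=np.float32),
         test=np.zeros((2, 7, 7, 2048), dtype=np.float32))

# np.load returns an NpzFile; arrays are read lazily by key
bottleneck_features = np.load(path)
print(sorted(bottleneck_features.files))   # ['test', 'train', 'valid']
print(bottleneck_features['train'].shape)  # (3, 7, 7, 2048)
```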
### TODO: Obtain bottleneck features from another pre-trained CNN.
bottleneck_features = np.load('bottleneck_features/DogResnet50Data.npz')
train_model = bottleneck_features['train']
valid_model = bottleneck_features['valid']
test_model = bottleneck_features['test']
Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:
<your model's name>.summary()
Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
Answer: Instead of starting with GlobalAveragePooling2D, I start with a fully connected layer by first reshaping the whole input into a one-dimensional tensor. The reason is that I want to preserve all of the information rather than filtering it away with GlobalAveragePooling2D. I then use purely fully connected layers, each followed by dropout to avoid overfitting and improve generalization. I use ReLU activations throughout, except at the end where I use softmax since this is a classification problem. I also added an early-stopping callback that triggers when the loss approaches 0.0001; however, even with 1000 epochs the loss never came close to that threshold. With roughly 9 million additional parameters, the network still plateaus at around 84 percent accuracy. My suspicion is that the final output of ResNet-50 contains a lot of information that does not contribute to recognizing dog breeds in general. I deliberately tried to overfit the model first, to make sure it has enough parameters to learn the task.
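The capacity trade-off described above can be checked by hand: a Dense layer costs `in_features * out_features + out_features` parameters, whereas GlobalAveragePooling2D adds none. A quick arithmetic sketch for the reshape-then-dense stack used here:

```python
def dense_params(n_in, n_out):
    # one weight per (input, output) pair plus one bias per output unit
    return n_in * n_out + n_out

total = (dense_params(2048, 3000)   # first fully connected layer on the 2048-vector
         + dense_params(3000, 1000)
         + dense_params(1000, 500)
         + dense_params(500, 133))  # softmax output over the 133 breeds
print(total)  # matches the 9,715,133 total reported by model.summary()
```

Had the stack started from GlobalAveragePooling2D, the first Dense layer would see the same 2048 features, so the reshape mainly changes which spatial information survives, not the first layer's size for this 1x1 feature map.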
### TODO: Define your architecture.
from keras.layers import GlobalAveragePooling2D, Reshape, Flatten
choosing_model = Sequential()
#print(train_model.shape[1:])
#choosing_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
#choosing_model.add(GlobalAveragePooling2D(input_shape=train_model.shape[1:]))
choosing_model.add(Reshape((2048,),input_shape=train_model.shape[1:]))
choosing_model.add(Dense(3000, activation="relu"))
choosing_model.add(Dropout(0.3))
choosing_model.add(Dense(1000, activation="relu"))
choosing_model.add(Dropout(0.2))
choosing_model.add(Dense(500, activation="relu"))
#choosing_model.add(Conv2D(filters=16,kernel_size=3,padding='same',activation='relu',input_shape=train_model[1:]))
#print(len(Xception_model.shape[1:]))
choosing_model.add(Dense(133, activation='softmax'))
choosing_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
reshape_1 (Reshape)          (None, 2048)              0
_________________________________________________________________
dense_4 (Dense)              (None, 3000)              6147000
_________________________________________________________________
dropout_1 (Dropout)          (None, 3000)              0
_________________________________________________________________
dense_5 (Dense)              (None, 1000)              3001000
_________________________________________________________________
dropout_2 (Dropout)          (None, 1000)              0
_________________________________________________________________
dense_6 (Dense)              (None, 500)               500500
_________________________________________________________________
dense_7 (Dense)              (None, 133)               66633
=================================================================
Total params: 9,715,133
Trainable params: 9,715,133
Non-trainable params: 0
_________________________________________________________________
import warnings
from keras.callbacks import Callback

class EarlyStoppingByLossVal(Callback):
    """Stop training once the monitored quantity drops below a threshold."""
    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):
        super(EarlyStoppingByLossVal, self).__init__()
        self.monitor = monitor
        self.value = value
        self.verbose = verbose

    def on_epoch_end(self, epoch, logs={}):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
            return
        if current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping THR" % epoch)
            self.model.stop_training = True
### TODO: Compile the model.
from keras.optimizers import Adagrad,RMSprop
adagrad = Adagrad(lr=0.0001, epsilon=1e-08, decay=0.0)
rmsprop = RMSprop(lr=0.0001, rho=0.9, epsilon=1e-08, decay=0.0)
choosing_model.compile(loss='categorical_crossentropy', optimizer=adagrad, metrics=['accuracy'])
Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.
You are welcome to augment the training data, but this is not a requirement.
### TODO: Train the model.
callbacks = [
    EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1),
    # EarlyStopping(monitor='val_loss', patience=2, verbose=0),
    ModelCheckpoint(filepath='saved_models/weights.best.Resnet50.hdf5',
                    verbose=1, save_best_only=True),
]
history = choosing_model.fit(train_model, train_targets,
                             validation_data=(valid_model, valid_targets),
                             epochs=1000, batch_size=128, callbacks=callbacks, verbose=1)
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
Train on 6680 samples, validate on 835 samples
[ResNet-50 training log, condensed] Epochs 1-90 of 1000: val_loss improved steadily from 4.40547 to 0.70567, each improvement saving the model to saved_models/weights.best.Resnet50.hdf5; by epoch 89: loss: 0.6686 - acc: 0.8229 - val_loss: 0.7057 - val_acc: 0.8084
- ETA: 0s - loss: 0.6641 - acc: 0.8227Epoch 00089: val_loss improved from 0.70567 to 0.70334, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6632 - acc: 0.8231 - val_loss: 0.7033 - val_acc: 0.8120 Epoch 91/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.6588 - acc: 0.8247Epoch 00090: val_loss improved from 0.70334 to 0.70260, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6582 - acc: 0.8249 - val_loss: 0.7026 - val_acc: 0.8120 Epoch 92/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.6660 - acc: 0.8226Epoch 00091: val_loss improved from 0.70260 to 0.69955, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6632 - acc: 0.8229 - val_loss: 0.6996 - val_acc: 0.8108 Epoch 93/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.6556 - acc: 0.8212Epoch 00092: val_loss improved from 0.69955 to 0.69910, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6539 - acc: 0.8219 - val_loss: 0.6991 - val_acc: 0.8108 Epoch 94/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.6506 - acc: 0.8291Epoch 00093: val_loss improved from 0.69910 to 0.69601, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6510 - acc: 0.8287 - val_loss: 0.6960 - val_acc: 0.8108 Epoch 95/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.6508 - acc: 0.8233Epoch 00094: val_loss improved from 0.69601 to 0.69346, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6506 - acc: 0.8231 - val_loss: 0.6935 - val_acc: 0.8132 Epoch 96/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.6492 - acc: 0.8312Epoch 00095: val_loss improved from 0.69346 to 0.69132, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6486 - acc: 0.8302 - val_loss: 0.6913 - val_acc: 0.8132 Epoch 97/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.6328 - acc: 0.8319Epoch 00096: val_loss improved from 0.69132 to 0.69035, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6345 - acc: 0.8317 - val_loss: 0.6904 - val_acc: 0.8156 Epoch 98/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.6315 - acc: 0.8347Epoch 00097: val_loss improved from 0.69035 to 0.68886, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6315 - acc: 0.8347 - val_loss: 0.6889 - val_acc: 0.8132 Epoch 99/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.6268 - acc: 0.8361Epoch 00098: val_loss improved from 0.68886 to 0.68629, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6263 - acc: 0.8362 - val_loss: 0.6863 - val_acc: 0.8156 Epoch 100/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.6206 - acc: 0.8335Epoch 00099: val_loss improved from 0.68629 to 0.68416, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6202 - acc: 0.8346 - val_loss: 0.6842 - val_acc: 0.8144 Epoch 101/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.6261 - acc: 0.8304Epoch 00100: val_loss improved from 0.68416 to 0.68279, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6260 - acc: 0.8310 - val_loss: 0.6828 - val_acc: 0.8168 Epoch 102/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.6278 - acc: 0.8289Epoch 00101: val_loss improved from 0.68279 to 0.68075, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6262 - acc: 0.8293 - val_loss: 0.6808 - val_acc: 0.8144 Epoch 103/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.6273 - acc: 0.8343Epoch 00102: val_loss improved from 0.68075 to 0.67985, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6260 - acc: 0.8347 - val_loss: 0.6799 - val_acc: 0.8180 Epoch 104/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.6089 - acc: 0.8329Epoch 00103: val_loss improved from 0.67985 to 0.67838, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6086 - acc: 0.8329 - val_loss: 0.6784 - val_acc: 0.8144 Epoch 105/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.6021 - acc: 0.8369Epoch 00104: val_loss improved from 0.67838 to 0.67571, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6042 - acc: 0.8373 - val_loss: 0.6757 - val_acc: 0.8156 Epoch 106/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.6049 - acc: 0.8456Epoch 00105: val_loss improved from 0.67571 to 0.67428, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6063 - acc: 0.8442 - val_loss: 0.6743 - val_acc: 0.8180 Epoch 107/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.6003 - acc: 0.8388Epoch 00106: val_loss improved from 0.67428 to 0.67329, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5961 - acc: 0.8398 - val_loss: 0.6733 - val_acc: 0.8204 Epoch 108/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.6181 - acc: 0.8338Epoch 00107: val_loss improved from 0.67329 to 0.67207, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.6204 - acc: 0.8329 - val_loss: 0.6721 - val_acc: 0.8132 Epoch 109/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5962 - acc: 0.8361Epoch 00108: val_loss improved from 0.67207 to 0.67020, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5949 - acc: 0.8367 - val_loss: 0.6702 - val_acc: 0.8180 Epoch 110/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5857 - acc: 0.8421Epoch 00109: val_loss improved from 0.67020 to 0.66917, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5860 - acc: 0.8418 - val_loss: 0.6692 - val_acc: 0.8168 Epoch 111/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5916 - acc: 0.8397Epoch 00110: val_loss improved from 0.66917 to 0.66770, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5912 - acc: 0.8398 - val_loss: 0.6677 - val_acc: 0.8168 Epoch 112/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5876 - acc: 0.8438Epoch 00111: val_loss improved from 0.66770 to 0.66619, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5878 - acc: 0.8437 - val_loss: 0.6662 - val_acc: 0.8168 Epoch 113/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5779 - acc: 0.8445Epoch 00112: val_loss improved from 0.66619 to 0.66590, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5789 - acc: 0.8439 - val_loss: 0.6659 - val_acc: 0.8192 Epoch 114/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.5831 - acc: 0.8425Epoch 00113: val_loss improved from 0.66590 to 0.66424, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5827 - acc: 0.8425 - val_loss: 0.6642 - val_acc: 0.8228 Epoch 115/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5784 - acc: 0.8427Epoch 00114: val_loss improved from 0.66424 to 0.66309, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5780 - acc: 0.8434 - val_loss: 0.6631 - val_acc: 0.8204 Epoch 116/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5848 - acc: 0.8409Epoch 00115: val_loss improved from 0.66309 to 0.66185, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5843 - acc: 0.8410 - val_loss: 0.6619 - val_acc: 0.8168 Epoch 117/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5709 - acc: 0.8422Epoch 00116: val_loss improved from 0.66185 to 0.65927, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5707 - acc: 0.8424 - val_loss: 0.6593 - val_acc: 0.8180 Epoch 118/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5671 - acc: 0.8465Epoch 00117: val_loss improved from 0.65927 to 0.65800, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5677 - acc: 0.8461 - val_loss: 0.6580 - val_acc: 0.8228 Epoch 119/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5655 - acc: 0.8471Epoch 00118: val_loss improved from 0.65800 to 0.65736, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5679 - acc: 0.8466 - val_loss: 0.6574 - val_acc: 0.8228 Epoch 120/1000 6400/6680 [===========================>..] 
- ETA: 0s - loss: 0.5561 - acc: 0.8530Epoch 00119: val_loss improved from 0.65736 to 0.65503, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5571 - acc: 0.8512 - val_loss: 0.6550 - val_acc: 0.8228 Epoch 121/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5495 - acc: 0.8513Epoch 00120: val_loss improved from 0.65503 to 0.65298, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5517 - acc: 0.8503 - val_loss: 0.6530 - val_acc: 0.8216 Epoch 122/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.5600 - acc: 0.8530Epoch 00121: val_loss improved from 0.65298 to 0.65206, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5617 - acc: 0.8534 - val_loss: 0.6521 - val_acc: 0.8216 Epoch 123/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5613 - acc: 0.8422Epoch 00122: val_loss improved from 0.65206 to 0.65149, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5612 - acc: 0.8421 - val_loss: 0.6515 - val_acc: 0.8228 Epoch 124/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.5643 - acc: 0.8494Epoch 00123: val_loss improved from 0.65149 to 0.65029, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5586 - acc: 0.8516 - val_loss: 0.6503 - val_acc: 0.8216 Epoch 125/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5608 - acc: 0.8456Epoch 00124: val_loss improved from 0.65029 to 0.64979, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5611 - acc: 0.8451 - val_loss: 0.6498 - val_acc: 0.8240 Epoch 126/1000 6400/6680 [===========================>..] 
- ETA: 0s - loss: 0.5516 - acc: 0.8505Epoch 00125: val_loss improved from 0.64979 to 0.64684, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5521 - acc: 0.8494 - val_loss: 0.6468 - val_acc: 0.8204 Epoch 127/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5495 - acc: 0.8490Epoch 00126: val_loss improved from 0.64684 to 0.64615, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5471 - acc: 0.8493 - val_loss: 0.6462 - val_acc: 0.8240 Epoch 128/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5518 - acc: 0.8495Epoch 00127: val_loss improved from 0.64615 to 0.64427, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5518 - acc: 0.8494 - val_loss: 0.6443 - val_acc: 0.8251 Epoch 129/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5479 - acc: 0.8520Epoch 00128: val_loss improved from 0.64427 to 0.64365, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5482 - acc: 0.8519 - val_loss: 0.6437 - val_acc: 0.8251 Epoch 130/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5411 - acc: 0.8568Epoch 00129: val_loss improved from 0.64365 to 0.64274, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5419 - acc: 0.8564 - val_loss: 0.6427 - val_acc: 0.8275 Epoch 131/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.5552 - acc: 0.8472Epoch 00130: val_loss improved from 0.64274 to 0.64132, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5520 - acc: 0.8485 - val_loss: 0.6413 - val_acc: 0.8311 Epoch 132/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.5436 - acc: 0.8573Epoch 00131: val_loss improved from 0.64132 to 0.64096, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5428 - acc: 0.8576 - val_loss: 0.6410 - val_acc: 0.8311 Epoch 133/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5394 - acc: 0.8595Epoch 00132: val_loss improved from 0.64096 to 0.63984, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5394 - acc: 0.8594 - val_loss: 0.6398 - val_acc: 0.8311 Epoch 134/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5305 - acc: 0.8580Epoch 00133: val_loss improved from 0.63984 to 0.63905, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5307 - acc: 0.8579 - val_loss: 0.6390 - val_acc: 0.8311 Epoch 135/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.5318 - acc: 0.8575Epoch 00134: val_loss improved from 0.63905 to 0.63798, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5310 - acc: 0.8582 - val_loss: 0.6380 - val_acc: 0.8299 Epoch 136/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5338 - acc: 0.8566Epoch 00135: val_loss improved from 0.63798 to 0.63673, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5317 - acc: 0.8573 - val_loss: 0.6367 - val_acc: 0.8287 Epoch 137/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5269 - acc: 0.8583Epoch 00136: val_loss improved from 0.63673 to 0.63631, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5261 - acc: 0.8581 - val_loss: 0.6363 - val_acc: 0.8287 Epoch 138/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.5375 - acc: 0.8558Epoch 00137: val_loss improved from 0.63631 to 0.63528, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5379 - acc: 0.8555 - val_loss: 0.6353 - val_acc: 0.8275 Epoch 139/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5247 - acc: 0.8531Epoch 00138: val_loss improved from 0.63528 to 0.63410, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5244 - acc: 0.8533 - val_loss: 0.6341 - val_acc: 0.8263 Epoch 140/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5196 - acc: 0.8560Epoch 00139: val_loss improved from 0.63410 to 0.63325, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5167 - acc: 0.8569 - val_loss: 0.6333 - val_acc: 0.8240 Epoch 141/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.5210 - acc: 0.8636Epoch 00140: val_loss improved from 0.63325 to 0.63188, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5204 - acc: 0.8636 - val_loss: 0.6319 - val_acc: 0.8323 Epoch 142/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5198 - acc: 0.8549Epoch 00141: val_loss improved from 0.63188 to 0.63134, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5192 - acc: 0.8549 - val_loss: 0.6313 - val_acc: 0.8287 Epoch 143/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5118 - acc: 0.8585Epoch 00142: val_loss improved from 0.63134 to 0.62982, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5120 - acc: 0.8579 - val_loss: 0.6298 - val_acc: 0.8323 Epoch 144/1000 6400/6680 [===========================>..] 
- ETA: 0s - loss: 0.5104 - acc: 0.8612Epoch 00143: val_loss improved from 0.62982 to 0.62878, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5087 - acc: 0.8621 - val_loss: 0.6288 - val_acc: 0.8275 Epoch 145/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5056 - acc: 0.8666Epoch 00144: val_loss improved from 0.62878 to 0.62828, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5054 - acc: 0.8675 - val_loss: 0.6283 - val_acc: 0.8263 Epoch 146/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5102 - acc: 0.8685Epoch 00145: val_loss improved from 0.62828 to 0.62593, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5113 - acc: 0.8681 - val_loss: 0.6259 - val_acc: 0.8263 Epoch 147/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5089 - acc: 0.8619Epoch 00146: val_loss improved from 0.62593 to 0.62581, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5095 - acc: 0.8617 - val_loss: 0.6258 - val_acc: 0.8251 Epoch 148/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5038 - acc: 0.8647Epoch 00147: val_loss improved from 0.62581 to 0.62490, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5035 - acc: 0.8647 - val_loss: 0.6249 - val_acc: 0.8263 Epoch 149/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.5015 - acc: 0.8666Epoch 00148: val_loss improved from 0.62490 to 0.62459, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5032 - acc: 0.8659 - val_loss: 0.6246 - val_acc: 0.8287 Epoch 150/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.4934 - acc: 0.8718Epoch 00149: val_loss improved from 0.62459 to 0.62347, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4927 - acc: 0.8719 - val_loss: 0.6235 - val_acc: 0.8335 Epoch 151/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5040 - acc: 0.8658Epoch 00150: val_loss improved from 0.62347 to 0.62238, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.5031 - acc: 0.8660 - val_loss: 0.6224 - val_acc: 0.8299 Epoch 152/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.5001 - acc: 0.8639Epoch 00151: val_loss improved from 0.62238 to 0.62196, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4993 - acc: 0.8642 - val_loss: 0.6220 - val_acc: 0.8311 Epoch 153/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.4908 - acc: 0.8660Epoch 00152: val_loss improved from 0.62196 to 0.62101, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4899 - acc: 0.8665 - val_loss: 0.6210 - val_acc: 0.8335 Epoch 154/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.4733 - acc: 0.8755Epoch 00153: val_loss improved from 0.62101 to 0.62009, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4736 - acc: 0.8753 - val_loss: 0.6201 - val_acc: 0.8311 Epoch 155/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.4867 - acc: 0.8672Epoch 00154: val_loss improved from 0.62009 to 0.61911, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4866 - acc: 0.8674 - val_loss: 0.6191 - val_acc: 0.8299 Epoch 156/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.4959 - acc: 0.8604Epoch 00155: val_loss improved from 0.61911 to 0.61830, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4972 - acc: 0.8597 - val_loss: 0.6183 - val_acc: 0.8359 Epoch 157/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.4915 - acc: 0.8675Epoch 00156: val_loss improved from 0.61830 to 0.61711, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4911 - acc: 0.8675 - val_loss: 0.6171 - val_acc: 0.8323 Epoch 158/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.4779 - acc: 0.8694Epoch 00157: val_loss improved from 0.61711 to 0.61613, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4791 - acc: 0.8692 - val_loss: 0.6161 - val_acc: 0.8311 Epoch 159/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.4867 - acc: 0.8704Epoch 00158: val_loss improved from 0.61613 to 0.61579, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4863 - acc: 0.8702 - val_loss: 0.6158 - val_acc: 0.8323 Epoch 160/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.4668 - acc: 0.8784Epoch 00159: val_loss improved from 0.61579 to 0.61511, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4689 - acc: 0.8774 - val_loss: 0.6151 - val_acc: 0.8335 Epoch 161/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.4816 - acc: 0.8729Epoch 00160: val_loss improved from 0.61511 to 0.61301, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4799 - acc: 0.8738 - val_loss: 0.6130 - val_acc: 0.8347 Epoch 162/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.4801 - acc: 0.8686Epoch 00161: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.4816 - acc: 0.8678 - val_loss: 0.6133 - val_acc: 0.8347 Epoch 163/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.4823 - acc: 0.8738Epoch 00162: val_loss improved from 0.61301 to 0.61168, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4828 - acc: 0.8737 - val_loss: 0.6117 - val_acc: 0.8359 Epoch 164/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.4656 - acc: 0.8806Epoch 00163: val_loss improved from 0.61168 to 0.61082, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4631 - acc: 0.8820 - val_loss: 0.6108 - val_acc: 0.8383 Epoch 165/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.4802 - acc: 0.8672Epoch 00164: val_loss improved from 0.61082 to 0.61031, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4801 - acc: 0.8674 - val_loss: 0.6103 - val_acc: 0.8371 Epoch 166/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.4702 - acc: 0.8758Epoch 00165: val_loss improved from 0.61031 to 0.60968, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4709 - acc: 0.8754 - val_loss: 0.6097 - val_acc: 0.8395 Epoch 167/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.4714 - acc: 0.8755Epoch 00166: val_loss improved from 0.60968 to 0.60922, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.4711 - acc: 0.8757 - val_loss: 0.6092 - val_acc: 0.8407 Epoch 168/1000 6656/6680 [============================>.] 
Epoch 00167: val_loss improved from 0.60922 to 0.60895, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4725 - acc: 0.8726 - val_loss: 0.6089 - val_acc: 0.8395
Epoch 169/1000
Epoch 00168: val_loss improved from 0.60895 to 0.60847, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4578 - acc: 0.8741 - val_loss: 0.6085 - val_acc: 0.8347
Epoch 170/1000
Epoch 00169: val_loss improved from 0.60847 to 0.60777, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4552 - acc: 0.8814 - val_loss: 0.6078 - val_acc: 0.8371
Epoch 171/1000
Epoch 00170: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.4683 - acc: 0.8717 - val_loss: 0.6081 - val_acc: 0.8359
Epoch 172/1000
Epoch 00171: val_loss improved from 0.60777 to 0.60722, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4556 - acc: 0.8781 - val_loss: 0.6072 - val_acc: 0.8395
Epoch 173/1000
Epoch 00172: val_loss improved from 0.60722 to 0.60670, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4463 - acc: 0.8864 - val_loss: 0.6067 - val_acc: 0.8395
Epoch 174/1000
Epoch 00173: val_loss improved from 0.60670 to 0.60521, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4620 - acc: 0.8732 - val_loss: 0.6052 - val_acc: 0.8419
Epoch 175/1000
Epoch 00174: val_loss improved from 0.60521 to 0.60436, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4484 - acc: 0.8808 - val_loss: 0.6044 - val_acc: 0.8395
Epoch 176/1000
Epoch 00175: val_loss improved from 0.60436 to 0.60398, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4386 - acc: 0.8852 - val_loss: 0.6040 - val_acc: 0.8419
Epoch 177/1000
Epoch 00176: val_loss improved from 0.60398 to 0.60355, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4583 - acc: 0.8753 - val_loss: 0.6036 - val_acc: 0.8407
Epoch 178/1000
Epoch 00177: val_loss improved from 0.60355 to 0.60268, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4530 - acc: 0.8795 - val_loss: 0.6027 - val_acc: 0.8407
Epoch 179/1000
Epoch 00178: val_loss improved from 0.60268 to 0.60206, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4516 - acc: 0.8774 - val_loss: 0.6021 - val_acc: 0.8407
Epoch 180/1000
Epoch 00179: val_loss improved from 0.60206 to 0.60061, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4390 - acc: 0.8870 - val_loss: 0.6006 - val_acc: 0.8359
Epoch 181/1000
Epoch 00180: val_loss improved from 0.60061 to 0.60031, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4426 - acc: 0.8816 - val_loss: 0.6003 - val_acc: 0.8347
Epoch 182/1000
Epoch 00181: val_loss improved from 0.60031 to 0.59907, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4355 - acc: 0.8865 - val_loss: 0.5991 - val_acc: 0.8371
Epoch 183/1000
Epoch 00182: val_loss improved from 0.59907 to 0.59858, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4398 - acc: 0.8813 - val_loss: 0.5986 - val_acc: 0.8347
Epoch 184/1000
Epoch 00183: val_loss improved from 0.59858 to 0.59819, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4462 - acc: 0.8819 - val_loss: 0.5982 - val_acc: 0.8383
Epoch 185/1000
Epoch 00184: val_loss improved from 0.59819 to 0.59745, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4372 - acc: 0.8813 - val_loss: 0.5974 - val_acc: 0.8407
Epoch 186/1000
Epoch 00185: val_loss improved from 0.59745 to 0.59681, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4400 - acc: 0.8828 - val_loss: 0.5968 - val_acc: 0.8395
Epoch 187/1000
Epoch 00186: val_loss improved from 0.59681 to 0.59621, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4353 - acc: 0.8822 - val_loss: 0.5962 - val_acc: 0.8431
Epoch 188/1000
Epoch 00187: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.4473 - acc: 0.8769 - val_loss: 0.5964 - val_acc: 0.8407
Epoch 189/1000
Epoch 00188: val_loss improved from 0.59621 to 0.59581, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4391 - acc: 0.8813 - val_loss: 0.5958 - val_acc: 0.8419
Epoch 190/1000
Epoch 00189: val_loss improved from 0.59581 to 0.59520, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4199 - acc: 0.8910 - val_loss: 0.5952 - val_acc: 0.8407
Epoch 191/1000
Epoch 00190: val_loss improved from 0.59520 to 0.59453, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4282 - acc: 0.8859 - val_loss: 0.5945 - val_acc: 0.8419
Epoch 192/1000
Epoch 00191: val_loss improved from 0.59453 to 0.59423, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4338 - acc: 0.8826 - val_loss: 0.5942 - val_acc: 0.8371
Epoch 193/1000
Epoch 00192: val_loss improved from 0.59423 to 0.59359, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4271 - acc: 0.8844 - val_loss: 0.5936 - val_acc: 0.8383
Epoch 194/1000
Epoch 00193: val_loss improved from 0.59359 to 0.59221, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4244 - acc: 0.8858 - val_loss: 0.5922 - val_acc: 0.8407
Epoch 195/1000
Epoch 00194: val_loss improved from 0.59221 to 0.59077, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4200 - acc: 0.8871 - val_loss: 0.5908 - val_acc: 0.8407
Epoch 196/1000
Epoch 00195: val_loss improved from 0.59077 to 0.58939, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4255 - acc: 0.8897 - val_loss: 0.5894 - val_acc: 0.8419
Epoch 197/1000
Epoch 00196: val_loss improved from 0.58939 to 0.58935, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4316 - acc: 0.8856 - val_loss: 0.5893 - val_acc: 0.8419
Epoch 198/1000
Epoch 00197: val_loss improved from 0.58935 to 0.58861, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4249 - acc: 0.8910 - val_loss: 0.5886 - val_acc: 0.8431
Epoch 199/1000
Epoch 00198: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.4259 - acc: 0.8843 - val_loss: 0.5887 - val_acc: 0.8407
Epoch 200/1000
Epoch 00199: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.4221 - acc: 0.8930 - val_loss: 0.5892 - val_acc: 0.8407
Epoch 201/1000
Epoch 00200: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.4253 - acc: 0.8870 - val_loss: 0.5890 - val_acc: 0.8407
Epoch 202/1000
Epoch 00201: val_loss improved from 0.58861 to 0.58803, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4210 - acc: 0.8874 - val_loss: 0.5880 - val_acc: 0.8383
Epoch 203/1000
Epoch 00202: val_loss improved from 0.58803 to 0.58770, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4181 - acc: 0.8895 - val_loss: 0.5877 - val_acc: 0.8371
Epoch 204/1000
Epoch 00203: val_loss improved from 0.58770 to 0.58668, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4112 - acc: 0.8897 - val_loss: 0.5867 - val_acc: 0.8371
Epoch 205/1000
Epoch 00204: val_loss improved from 0.58668 to 0.58567, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4161 - acc: 0.8873 - val_loss: 0.5857 - val_acc: 0.8395
Epoch 206/1000
Epoch 00205: val_loss improved from 0.58567 to 0.58482, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4084 - acc: 0.8888 - val_loss: 0.5848 - val_acc: 0.8383
Epoch 207/1000
Epoch 00206: val_loss improved from 0.58482 to 0.58437, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4184 - acc: 0.8867 - val_loss: 0.5844 - val_acc: 0.8407
Epoch 208/1000
Epoch 00207: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3979 - acc: 0.8987 - val_loss: 0.5849 - val_acc: 0.8419
Epoch 209/1000
Epoch 00208: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.4114 - acc: 0.8865 - val_loss: 0.5851 - val_acc: 0.8395
Epoch 210/1000
Epoch 00209: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.4031 - acc: 0.8898 - val_loss: 0.5850 - val_acc: 0.8419
Epoch 211/1000
Epoch 00210: val_loss improved from 0.58437 to 0.58376, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4073 - acc: 0.8928 - val_loss: 0.5838 - val_acc: 0.8455
Epoch 212/1000
Epoch 00211: val_loss improved from 0.58376 to 0.58321, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4021 - acc: 0.8913 - val_loss: 0.5832 - val_acc: 0.8455
Epoch 213/1000
Epoch 00212: val_loss improved from 0.58321 to 0.58292, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3966 - acc: 0.8988 - val_loss: 0.5829 - val_acc: 0.8443
Epoch 214/1000
Epoch 00213: val_loss improved from 0.58292 to 0.58200, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4045 - acc: 0.8927 - val_loss: 0.5820 - val_acc: 0.8431
Epoch 215/1000
Epoch 00214: val_loss improved from 0.58200 to 0.58137, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4001 - acc: 0.8918 - val_loss: 0.5814 - val_acc: 0.8395
Epoch 216/1000
Epoch 00215: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.4056 - acc: 0.8883 - val_loss: 0.5818 - val_acc: 0.8431
Epoch 217/1000
Epoch 00216: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3901 - acc: 0.8981 - val_loss: 0.5817 - val_acc: 0.8395
Epoch 218/1000
Epoch 00217: val_loss improved from 0.58137 to 0.58066, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3994 - acc: 0.8975 - val_loss: 0.5807 - val_acc: 0.8359
Epoch 219/1000
Epoch 00218: val_loss improved from 0.58066 to 0.58024, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.4128 - acc: 0.8898 - val_loss: 0.5802 - val_acc: 0.8407
Epoch 220/1000
Epoch 00219: val_loss improved from 0.58024 to 0.58004, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3930 - acc: 0.8973 - val_loss: 0.5800 - val_acc: 0.8419
Epoch 221/1000
Epoch 00220: val_loss improved from 0.58004 to 0.58002, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3898 - acc: 0.8967 - val_loss: 0.5800 - val_acc: 0.8407
Epoch 222/1000
Epoch 00221: val_loss improved from 0.58002 to 0.57899, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3976 - acc: 0.8940 - val_loss: 0.5790 - val_acc: 0.8419
Epoch 223/1000
Epoch 00222: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3889 - acc: 0.8951 - val_loss: 0.5791 - val_acc: 0.8455
Epoch 224/1000
Epoch 00223: val_loss improved from 0.57899 to 0.57823, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3971 - acc: 0.8915 - val_loss: 0.5782 - val_acc: 0.8443
Epoch 225/1000
Epoch 00224: val_loss improved from 0.57823 to 0.57820, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3918 - acc: 0.8960 - val_loss: 0.5782 - val_acc: 0.8395
Epoch 226/1000
Epoch 00225: val_loss improved from 0.57820 to 0.57663, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3869 - acc: 0.9018 - val_loss: 0.5766 - val_acc: 0.8407
Epoch 227/1000
Epoch 00226: val_loss improved from 0.57663 to 0.57635, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3946 - acc: 0.8933 - val_loss: 0.5763 - val_acc: 0.8383
Epoch 228/1000
Epoch 00227: val_loss improved from 0.57635 to 0.57604, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3803 - acc: 0.8979 - val_loss: 0.5760 - val_acc: 0.8407
Epoch 229/1000
Epoch 00228: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3834 - acc: 0.8997 - val_loss: 0.5761 - val_acc: 0.8443
Epoch 230/1000
Epoch 00229: val_loss improved from 0.57604 to 0.57592, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3783 - acc: 0.9022 - val_loss: 0.5759 - val_acc: 0.8443
Epoch 231/1000
Epoch 00230: val_loss improved from 0.57592 to 0.57482, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3828 - acc: 0.8978 - val_loss: 0.5748 - val_acc: 0.8419
Epoch 232/1000
Epoch 00231: val_loss improved from 0.57482 to 0.57435, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3791 - acc: 0.9016 - val_loss: 0.5743 - val_acc: 0.8455
Epoch 233/1000
Epoch 00232: val_loss improved from 0.57435 to 0.57344, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3807 - acc: 0.8960 - val_loss: 0.5734 - val_acc: 0.8419
Epoch 234/1000
Epoch 00233: val_loss improved from 0.57344 to 0.57215, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3883 - acc: 0.8981 - val_loss: 0.5722 - val_acc: 0.8443
Epoch 235/1000
Epoch 00234: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3799 - acc: 0.8993 - val_loss: 0.5722 - val_acc: 0.8455
Epoch 236/1000
Epoch 00235: val_loss improved from 0.57215 to 0.57178, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3774 - acc: 0.9004 - val_loss: 0.5718 - val_acc: 0.8419
Epoch 237/1000
Epoch 00236: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3736 - acc: 0.8987 - val_loss: 0.5724 - val_acc: 0.8419
Epoch 238/1000
Epoch 00237: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3776 - acc: 0.9006 - val_loss: 0.5722 - val_acc: 0.8419
Epoch 239/1000
Epoch 00238: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3830 - acc: 0.8969 - val_loss: 0.5722 - val_acc: 0.8407
Epoch 240/1000
Epoch 00239: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3726 - acc: 0.9006 - val_loss: 0.5719 - val_acc: 0.8407
Epoch 241/1000
Epoch 00240: val_loss improved from 0.57178 to 0.57164, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3729 - acc: 0.9051 - val_loss: 0.5716 - val_acc: 0.8431
Epoch 242/1000
Epoch 00241: val_loss improved from 0.57164 to 0.57089, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3691 - acc: 0.9024 - val_loss: 0.5709 - val_acc: 0.8407
Epoch 243/1000
Epoch 00242: val_loss improved from 0.57089 to 0.57043, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3658 - acc: 0.9024 - val_loss: 0.5704 - val_acc: 0.8431
Epoch 244/1000
Epoch 00243: val_loss improved from 0.57043 to 0.57032, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3661 - acc: 0.9039 - val_loss: 0.5703 - val_acc: 0.8431
Epoch 245/1000
Epoch 00244: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3673 - acc: 0.8999 - val_loss: 0.5704 - val_acc: 0.8431
Epoch 246/1000
Epoch 00245: val_loss improved from 0.57032 to 0.56960, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3633 - acc: 0.9021 - val_loss: 0.5696 - val_acc: 0.8443
Epoch 247/1000
Epoch 00246: val_loss improved from 0.56960 to 0.56873, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3685 - acc: 0.9033 - val_loss: 0.5687 - val_acc: 0.8431
Epoch 248/1000
Epoch 00247: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3662 - acc: 0.9010 - val_loss: 0.5689 - val_acc: 0.8431
Epoch 249/1000
Epoch 00248: val_loss improved from 0.56873 to 0.56866, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3579 - acc: 0.9064 - val_loss: 0.5687 - val_acc: 0.8431
Epoch 250/1000
Epoch 00249: val_loss improved from 0.56866 to 0.56789, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3723 - acc: 0.9012 - val_loss: 0.5679 - val_acc: 0.8407
Epoch 251/1000
Epoch 00250: val_loss improved from 0.56789 to 0.56748, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3678 - acc: 0.9049 - val_loss: 0.5675 - val_acc: 0.8431
Epoch 252/1000
Epoch 00251: val_loss improved from 0.56748 to 0.56708, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3621 - acc: 0.9034 - val_loss: 0.5671 - val_acc: 0.8431
Epoch 253/1000
Epoch 00252: val_loss improved from 0.56708 to 0.56700, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3700 - acc: 0.9015 - val_loss: 0.5670 - val_acc: 0.8455
Epoch 254/1000
Epoch 00253: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3557 - acc: 0.9072 - val_loss: 0.5671 - val_acc: 0.8455
Epoch 255/1000
Epoch 00254: val_loss improved from 0.56700 to 0.56643, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3636 - acc: 0.9018 - val_loss: 0.5664 - val_acc: 0.8443
Epoch 256/1000
Epoch 00255: val_loss improved from 0.56643 to 0.56596, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3538 - acc: 0.9058 - val_loss: 0.5660 - val_acc: 0.8443
Epoch 257/1000
Epoch 00256: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3579 - acc: 0.9070 - val_loss: 0.5661 - val_acc: 0.8419
Epoch 258/1000
Epoch 00257: val_loss improved from 0.56596 to 0.56526, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3527 - acc: 0.9094 - val_loss: 0.5653 - val_acc: 0.8407
Epoch 259/1000
Epoch 00258: val_loss improved from 0.56526 to 0.56478, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3591 - acc: 0.9033 - val_loss: 0.5648 - val_acc: 0.8395
Epoch 260/1000
Epoch 00259: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3678 - acc: 0.9028 - val_loss: 0.5654 - val_acc: 0.8407
Epoch 261/1000
Epoch 00260: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3506 - acc: 0.9066 - val_loss: 0.5656 - val_acc: 0.8407
Epoch 262/1000
Epoch 00261: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3529 - acc: 0.9055 - val_loss: 0.5649 - val_acc: 0.8431
Epoch 263/1000
Epoch 00262: val_loss improved from 0.56478 to 0.56422, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3536 - acc: 0.9066 - val_loss: 0.5642 - val_acc: 0.8443
Epoch 264/1000
Epoch 00263: val_loss improved from 0.56422 to 0.56400, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3551 - acc: 0.9082 - val_loss: 0.5640 - val_acc: 0.8455
Epoch 265/1000
Epoch 00264: val_loss improved from 0.56400 to 0.56365, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3499 - acc: 0.9082 - val_loss: 0.5636 - val_acc: 0.8467
Epoch 266/1000
Epoch 00265: val_loss improved from 0.56365 to 0.56323, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3442 - acc: 0.9106 - val_loss: 0.5632 - val_acc: 0.8431
Epoch 267/1000
Epoch 00266: val_loss improved from 0.56323 to 0.56265, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3512 - acc: 0.9093 - val_loss: 0.5627 - val_acc: 0.8431
Epoch 268/1000
Epoch 00267: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3490 - acc: 0.9076 - val_loss: 0.5628 - val_acc: 0.8443
Epoch 269/1000
Epoch 00268: val_loss improved from 0.56265 to 0.56234, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3502 - acc: 0.9084 - val_loss: 0.5623 - val_acc: 0.8467
Epoch 270/1000
Epoch 00269: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3386 - acc: 0.9177 - val_loss: 0.5624 - val_acc: 0.8455
Epoch 271/1000
Epoch 00270: val_loss improved from 0.56234 to 0.56224, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3467 - acc: 0.9109 - val_loss: 0.5622 - val_acc: 0.8467
Epoch 272/1000
Epoch 00271: val_loss improved from 0.56224 to 0.56181, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3432 - acc: 0.9126 - val_loss: 0.5618 - val_acc: 0.8455
Epoch 273/1000
Epoch 00272: val_loss improved from 0.56181 to 0.56174, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3487 - acc: 0.9066 - val_loss: 0.5617 - val_acc: 0.8455
Epoch 274/1000
Epoch 00273: val_loss improved from 0.56174 to 0.56128, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3538 - acc: 0.9061 - val_loss: 0.5613 - val_acc: 0.8443
Epoch 275/1000
Epoch 00274: val_loss improved from 0.56128 to 0.56028, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3371 - acc: 0.9147 - val_loss: 0.5603 - val_acc: 0.8431
Epoch 276/1000
Epoch 00275: val_loss improved from 0.56028 to 0.55972, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3394 - acc: 0.9078 - val_loss: 0.5597 - val_acc: 0.8395
Epoch 277/1000
Epoch 00276: val_loss improved from 0.55972 to 0.55906, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3489 - acc: 0.9081 - val_loss: 0.5591 - val_acc: 0.8407
Epoch 278/1000
Epoch 00277: val_loss improved from 0.55906 to 0.55844, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3434 - acc: 0.9094 - val_loss: 0.5584 - val_acc: 0.8431
Epoch 279/1000
Epoch 00278: val_loss improved from 0.55844 to 0.55811, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3370 - acc: 0.9124 - val_loss: 0.5581 - val_acc: 0.8431
Epoch 280/1000
Epoch 00279: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3424 - acc: 0.9072 - val_loss: 0.5583 - val_acc: 0.8419
Epoch 281/1000
Epoch 00280: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3323 - acc: 0.9160 - val_loss: 0.5585 - val_acc: 0.8431
Epoch 282/1000
Epoch 00281: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3345 - acc: 0.9130 - val_loss: 0.5581 - val_acc: 0.8467
Epoch 283/1000
Epoch 00282: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3352 - acc: 0.9160 - val_loss: 0.5582 - val_acc: 0.8443
Epoch 284/1000
Epoch 00283: val_loss improved from 0.55811 to 0.55808, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3346 - acc: 0.9135 - val_loss: 0.5581 - val_acc: 0.8455
Epoch 285/1000
Epoch 00284: val_loss improved from 0.55808 to 0.55783, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3397 - acc: 0.9082 - val_loss: 0.5578 - val_acc: 0.8479
Epoch 286/1000
Epoch 00285: val_loss improved from 0.55783 to 0.55728, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3334 - acc: 0.9145 - val_loss: 0.5573 - val_acc: 0.8479
Epoch 287/1000
Epoch 00286: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3260 - acc: 0.9169 - val_loss: 0.5573 - val_acc: 0.8467
Epoch 288/1000
Epoch 00287: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3396 - acc: 0.9103 - val_loss: 0.5575 - val_acc: 0.8467
Epoch 289/1000
Epoch 00288: val_loss improved from 0.55728 to 0.55636, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3294 - acc: 0.9120 - val_loss: 0.5564 - val_acc: 0.8467
Epoch 290/1000
Epoch 00289: val_loss improved from 0.55636 to 0.55632, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3315 - acc: 0.9132 - val_loss: 0.5563 - val_acc: 0.8467
Epoch 291/1000
Epoch 00290: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3260 - acc: 0.9165 - val_loss: 0.5563 - val_acc: 0.8455
Epoch 292/1000
Epoch 00291: val_loss improved from 0.55632 to 0.55597, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3307 - acc: 0.9165 - val_loss: 0.5560 - val_acc: 0.8443
Epoch 293/1000
Epoch 00292: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3260 - acc: 0.9163 - val_loss: 0.5563 - val_acc: 0.8467
Epoch 294/1000
Epoch 00293: val_loss improved from 0.55597 to 0.55573, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3362 - acc: 0.9097 - val_loss: 0.5557 - val_acc: 0.8467
Epoch 295/1000
Epoch 00294: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.3201 - acc: 0.9171 - val_loss: 0.5558 - val_acc: 0.8467
Epoch 296/1000
Epoch 00295: val_loss improved from 0.55573 to 0.55513, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3261 - acc: 0.9127 - val_loss: 0.5551 - val_acc: 0.8467
Epoch 297/1000
Epoch 00296: val_loss improved from 0.55513 to 0.55491, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3248 - acc: 0.9160 - val_loss: 0.5549 - val_acc: 0.8479
Epoch 298/1000
Epoch 00297: val_loss improved from 0.55491 to 0.55442, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3233 - acc: 0.9168 - val_loss: 0.5544 - val_acc: 0.8491
Epoch 299/1000
Epoch 00298: val_loss improved from 0.55442 to 0.55391, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3322 - acc: 0.9100 - val_loss: 0.5539 - val_acc: 0.8479
Epoch 300/1000
Epoch 00299: val_loss improved from 0.55391 to 0.55384, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3230 - acc: 0.9147 - val_loss: 0.5538 - val_acc: 0.8479
Epoch 301/1000
Epoch 00300: val_loss improved from 0.55384 to 0.55330, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.3216 - acc: 0.9148 - val_loss: 0.5533 - val_acc: 0.8467
Epoch 302/1000
- ETA: 0s - loss: 0.3218 - acc: 0.9154Epoch 00301: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3217 - acc: 0.9153 - val_loss: 0.5535 - val_acc: 0.8455 Epoch 303/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.3277 - acc: 0.9107Epoch 00302: val_loss improved from 0.55330 to 0.55310, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3263 - acc: 0.9111 - val_loss: 0.5531 - val_acc: 0.8443 Epoch 304/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3179 - acc: 0.9162Epoch 00303: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3177 - acc: 0.9162 - val_loss: 0.5539 - val_acc: 0.8455 Epoch 305/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3173 - acc: 0.9187Epoch 00304: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3173 - acc: 0.9187 - val_loss: 0.5534 - val_acc: 0.8467 Epoch 306/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.3136 - acc: 0.9176Epoch 00305: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3138 - acc: 0.9172 - val_loss: 0.5533 - val_acc: 0.8467 Epoch 307/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.3222 - acc: 0.9176Epoch 00306: val_loss improved from 0.55310 to 0.55304, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3212 - acc: 0.9177 - val_loss: 0.5530 - val_acc: 0.8479 Epoch 308/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.3192 - acc: 0.9148Epoch 00307: val_loss improved from 0.55304 to 0.55283, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3194 - acc: 0.9148 - val_loss: 0.5528 - val_acc: 0.8467 Epoch 309/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.3093 - acc: 0.9223Epoch 00308: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3093 - acc: 0.9222 - val_loss: 0.5531 - val_acc: 0.8479 Epoch 310/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3201 - acc: 0.9150Epoch 00309: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3198 - acc: 0.9150 - val_loss: 0.5532 - val_acc: 0.8479 Epoch 311/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3201 - acc: 0.9160Epoch 00310: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3196 - acc: 0.9162 - val_loss: 0.5535 - val_acc: 0.8479 Epoch 312/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.3179 - acc: 0.9177Epoch 00311: val_loss improved from 0.55283 to 0.55231, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3175 - acc: 0.9180 - val_loss: 0.5523 - val_acc: 0.8467 Epoch 313/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.3165 - acc: 0.9185Epoch 00312: val_loss improved from 0.55231 to 0.55143, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3178 - acc: 0.9180 - val_loss: 0.5514 - val_acc: 0.8467 Epoch 314/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3154 - acc: 0.9151Epoch 00313: val_loss improved from 0.55143 to 0.55081, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3157 - acc: 0.9148 - val_loss: 0.5508 - val_acc: 0.8467 Epoch 315/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.3169 - acc: 0.9160Epoch 00314: val_loss improved from 0.55081 to 0.55006, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3173 - acc: 0.9157 - val_loss: 0.5501 - val_acc: 0.8479 Epoch 316/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3073 - acc: 0.9213Epoch 00315: val_loss improved from 0.55006 to 0.54987, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3069 - acc: 0.9214 - val_loss: 0.5499 - val_acc: 0.8479 Epoch 317/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.3154 - acc: 0.9200Epoch 00316: val_loss improved from 0.54987 to 0.54944, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3171 - acc: 0.9186 - val_loss: 0.5494 - val_acc: 0.8467 Epoch 318/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3060 - acc: 0.9223Epoch 00317: val_loss improved from 0.54944 to 0.54904, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3056 - acc: 0.9226 - val_loss: 0.5490 - val_acc: 0.8467 Epoch 319/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3181 - acc: 0.9174Epoch 00318: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3177 - acc: 0.9177 - val_loss: 0.5491 - val_acc: 0.8479 Epoch 320/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3207 - acc: 0.9156Epoch 00319: val_loss improved from 0.54904 to 0.54824, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3203 - acc: 0.9157 - val_loss: 0.5482 - val_acc: 0.8503 Epoch 321/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.3108 - acc: 0.9184Epoch 00320: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3103 - acc: 0.9187 - val_loss: 0.5482 - val_acc: 0.8455 Epoch 322/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3149 - acc: 0.9178Epoch 00321: val_loss improved from 0.54824 to 0.54819, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3149 - acc: 0.9178 - val_loss: 0.5482 - val_acc: 0.8455 Epoch 323/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.3030 - acc: 0.9216Epoch 00322: val_loss improved from 0.54819 to 0.54802, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3036 - acc: 0.9217 - val_loss: 0.5480 - val_acc: 0.8467 Epoch 324/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2996 - acc: 0.9243Epoch 00323: val_loss improved from 0.54802 to 0.54788, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2995 - acc: 0.9243 - val_loss: 0.5479 - val_acc: 0.8491 Epoch 325/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.3044 - acc: 0.9219Epoch 00324: val_loss improved from 0.54788 to 0.54743, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3045 - acc: 0.9213 - val_loss: 0.5474 - val_acc: 0.8479 Epoch 326/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3069 - acc: 0.9181Epoch 00325: val_loss improved from 0.54743 to 0.54719, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3075 - acc: 0.9177 - val_loss: 0.5472 - val_acc: 0.8479 Epoch 327/1000 6400/6680 [===========================>..] 
- ETA: 0s - loss: 0.3028 - acc: 0.9250Epoch 00326: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3052 - acc: 0.9238 - val_loss: 0.5474 - val_acc: 0.8479 Epoch 328/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3137 - acc: 0.9180Epoch 00327: val_loss improved from 0.54719 to 0.54644, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3143 - acc: 0.9180 - val_loss: 0.5464 - val_acc: 0.8491 Epoch 329/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2993 - acc: 0.9216Epoch 00328: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2989 - acc: 0.9217 - val_loss: 0.5467 - val_acc: 0.8491 Epoch 330/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3004 - acc: 0.9214Epoch 00329: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3001 - acc: 0.9216 - val_loss: 0.5467 - val_acc: 0.8467 Epoch 331/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3030 - acc: 0.9226Epoch 00330: val_loss improved from 0.54644 to 0.54593, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3030 - acc: 0.9225 - val_loss: 0.5459 - val_acc: 0.8455 Epoch 332/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2997 - acc: 0.9229Epoch 00331: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2995 - acc: 0.9231 - val_loss: 0.5467 - val_acc: 0.8479 Epoch 333/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3054 - acc: 0.9210Epoch 00332: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3051 - acc: 0.9210 - val_loss: 0.5464 - val_acc: 0.8455 Epoch 334/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.2967 - acc: 0.9246Epoch 00333: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2967 - acc: 0.9247 - val_loss: 0.5461 - val_acc: 0.8467 Epoch 335/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3014 - acc: 0.9222Epoch 00334: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3011 - acc: 0.9222 - val_loss: 0.5459 - val_acc: 0.8455 Epoch 336/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2921 - acc: 0.9242Epoch 00335: val_loss improved from 0.54593 to 0.54547, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2924 - acc: 0.9243 - val_loss: 0.5455 - val_acc: 0.8455 Epoch 337/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3025 - acc: 0.9196Epoch 00336: val_loss improved from 0.54547 to 0.54533, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3023 - acc: 0.9198 - val_loss: 0.5453 - val_acc: 0.8467 Epoch 338/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2927 - acc: 0.9243Epoch 00337: val_loss improved from 0.54533 to 0.54529, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2924 - acc: 0.9246 - val_loss: 0.5453 - val_acc: 0.8467 Epoch 339/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.3001 - acc: 0.9219Epoch 00338: val_loss improved from 0.54529 to 0.54522, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.3016 - acc: 0.9213 - val_loss: 0.5452 - val_acc: 0.8455 Epoch 340/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.2962 - acc: 0.9248Epoch 00339: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2946 - acc: 0.9251 - val_loss: 0.5453 - val_acc: 0.8479 Epoch 341/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3003 - acc: 0.9235Epoch 00340: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.3007 - acc: 0.9234 - val_loss: 0.5452 - val_acc: 0.8479 Epoch 342/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2953 - acc: 0.9270Epoch 00341: val_loss improved from 0.54522 to 0.54499, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2956 - acc: 0.9268 - val_loss: 0.5450 - val_acc: 0.8503 Epoch 343/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.3000 - acc: 0.9178Epoch 00342: val_loss improved from 0.54499 to 0.54493, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2994 - acc: 0.9181 - val_loss: 0.5449 - val_acc: 0.8527 Epoch 344/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2994 - acc: 0.9246- ETA: 1s - loss: 0.304Epoch 00343: val_loss improved from 0.54493 to 0.54424, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2984 - acc: 0.9254 - val_loss: 0.5442 - val_acc: 0.8491 Epoch 345/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2941 - acc: 0.9263Epoch 00344: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2949 - acc: 0.9257 - val_loss: 0.5446 - val_acc: 0.8491 Epoch 346/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.2975 - acc: 0.9226Epoch 00345: val_loss improved from 0.54424 to 0.54418, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2978 - acc: 0.9226 - val_loss: 0.5442 - val_acc: 0.8503 Epoch 347/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2922 - acc: 0.9272Epoch 00346: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2898 - acc: 0.9278 - val_loss: 0.5444 - val_acc: 0.8467 Epoch 348/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2939 - acc: 0.9190Epoch 00347: val_loss improved from 0.54418 to 0.54349, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2926 - acc: 0.9195 - val_loss: 0.5435 - val_acc: 0.8467 Epoch 349/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2958 - acc: 0.9259Epoch 00348: val_loss improved from 0.54349 to 0.54347, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2957 - acc: 0.9260 - val_loss: 0.5435 - val_acc: 0.8491 Epoch 350/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2914 - acc: 0.9247Epoch 00349: val_loss improved from 0.54347 to 0.54263, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2913 - acc: 0.9247 - val_loss: 0.5426 - val_acc: 0.8503 Epoch 351/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2912 - acc: 0.9256Epoch 00350: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2932 - acc: 0.9249 - val_loss: 0.5430 - val_acc: 0.8503 Epoch 352/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.2893 - acc: 0.9275Epoch 00351: val_loss improved from 0.54263 to 0.54202, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2896 - acc: 0.9275 - val_loss: 0.5420 - val_acc: 0.8479 Epoch 353/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2894 - acc: 0.9225Epoch 00352: val_loss improved from 0.54202 to 0.54158, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2896 - acc: 0.9223 - val_loss: 0.5416 - val_acc: 0.8491 Epoch 354/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2867 - acc: 0.9280Epoch 00353: val_loss improved from 0.54158 to 0.54107, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2869 - acc: 0.9278 - val_loss: 0.5411 - val_acc: 0.8503 Epoch 355/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2933 - acc: 0.9234Epoch 00354: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2928 - acc: 0.9235 - val_loss: 0.5417 - val_acc: 0.8503 Epoch 356/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2889 - acc: 0.9252Epoch 00355: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2882 - acc: 0.9254 - val_loss: 0.5418 - val_acc: 0.8503 Epoch 357/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2864 - acc: 0.9280Epoch 00356: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2859 - acc: 0.9283 - val_loss: 0.5413 - val_acc: 0.8491 Epoch 358/1000 6656/6680 [============================>.] 
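The "improved / did not improve / saving model" messages in the log above come from a checkpoint callback that tracks the best validation loss seen so far (in Keras this is typically `ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True)`). As a minimal sketch of that decision logic, here is the same bookkeeping in plain Python, with the class name `BestValLossCheckpoint` and its methods being illustrative, not part of any library:

```python
class BestValLossCheckpoint:
    """Sketch of save-best-only checkpointing: remember the lowest
    val_loss seen so far and report whether this epoch would trigger
    a save (the real callback would also write the weights file)."""

    def __init__(self, filepath):
        self.filepath = filepath
        self.best = float("inf")  # first epoch always counts as an improvement

    def on_epoch_end(self, epoch, val_loss):
        if val_loss < self.best:
            msg = ("Epoch %05d: val_loss improved from %.5f to %.5f, "
                   "saving model to %s" % (epoch, self.best, val_loss, self.filepath))
            self.best = val_loss
            return True, msg
        return False, "Epoch %05d: val_loss did not improve" % epoch


# Replaying three val_loss values from the log reproduces its decisions:
cp = BestValLossCheckpoint("saved_models/weights.best.Resnet50.hdf5")
for epoch, vl in [(277, 0.5591), (278, 0.5584), (279, 0.5585)]:
    saved, msg = cp.on_epoch_end(epoch, vl)  # saves on the first two only
```

Since the callback compares against the best value over the whole run, long plateaus (many "did not improve" epochs) still leave the saved weights untouched, which is why the final `weights.best.Resnet50.hdf5` holds the epoch with the lowest validation loss rather than the last epoch.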
- ETA: 0s - loss: 0.2884 - acc: 0.9244Epoch 00357: val_loss improved from 0.54107 to 0.54095, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2880 - acc: 0.9247 - val_loss: 0.5409 - val_acc: 0.8479 Epoch 359/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2866 - acc: 0.9253Epoch 00358: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2865 - acc: 0.9254 - val_loss: 0.5414 - val_acc: 0.8467 Epoch 360/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2797 - acc: 0.9325Epoch 00359: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2805 - acc: 0.9323 - val_loss: 0.5411 - val_acc: 0.8479 Epoch 361/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2868 - acc: 0.9267Epoch 00360: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2869 - acc: 0.9268 - val_loss: 0.5412 - val_acc: 0.8491 Epoch 362/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2873 - acc: 0.9248Epoch 00361: val_loss improved from 0.54095 to 0.54083, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2880 - acc: 0.9249 - val_loss: 0.5408 - val_acc: 0.8479 Epoch 363/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2802 - acc: 0.9261Epoch 00362: val_loss improved from 0.54083 to 0.54035, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2800 - acc: 0.9262 - val_loss: 0.5403 - val_acc: 0.8479 Epoch 364/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2819 - acc: 0.9277Epoch 00363: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2816 - acc: 0.9284 - val_loss: 0.5407 - val_acc: 0.8491 Epoch 365/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.2826 - acc: 0.9309Epoch 00364: val_loss improved from 0.54035 to 0.54008, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2818 - acc: 0.9307 - val_loss: 0.5401 - val_acc: 0.8479 Epoch 366/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2787 - acc: 0.9317Epoch 00365: val_loss improved from 0.54008 to 0.53971, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2780 - acc: 0.9322 - val_loss: 0.5397 - val_acc: 0.8491 Epoch 367/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2870 - acc: 0.9219Epoch 00366: val_loss improved from 0.53971 to 0.53958, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2848 - acc: 0.9231 - val_loss: 0.5396 - val_acc: 0.8491 Epoch 368/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2800 - acc: 0.9283Epoch 00367: val_loss improved from 0.53958 to 0.53940, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2811 - acc: 0.9278 - val_loss: 0.5394 - val_acc: 0.8491 Epoch 369/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2846 - acc: 0.9298Epoch 00368: val_loss improved from 0.53940 to 0.53914, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2853 - acc: 0.9295 - val_loss: 0.5391 - val_acc: 0.8479 Epoch 370/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2826 - acc: 0.9295Epoch 00369: val_loss improved from 0.53914 to 0.53913, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2809 - acc: 0.9301 - val_loss: 0.5391 - val_acc: 0.8491 Epoch 371/1000 6400/6680 [===========================>..] 
- ETA: 0s - loss: 0.2783 - acc: 0.9303Epoch 00370: val_loss improved from 0.53913 to 0.53875, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2782 - acc: 0.9307 - val_loss: 0.5387 - val_acc: 0.8503 Epoch 372/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2831 - acc: 0.9241Epoch 00371: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2827 - acc: 0.9241 - val_loss: 0.5391 - val_acc: 0.8491 Epoch 373/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2826 - acc: 0.9280Epoch 00372: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2829 - acc: 0.9278 - val_loss: 0.5393 - val_acc: 0.8503 Epoch 374/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2722 - acc: 0.9308Epoch 00373: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2725 - acc: 0.9310 - val_loss: 0.5394 - val_acc: 0.8479 Epoch 375/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2771 - acc: 0.9287Epoch 00374: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2780 - acc: 0.9281 - val_loss: 0.5388 - val_acc: 0.8467 Epoch 376/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2853 - acc: 0.9282Epoch 00375: val_loss improved from 0.53875 to 0.53821, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2850 - acc: 0.9283 - val_loss: 0.5382 - val_acc: 0.8491 Epoch 377/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2718 - acc: 0.9291Epoch 00376: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2716 - acc: 0.9299 - val_loss: 0.5385 - val_acc: 0.8503 Epoch 378/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.2759 - acc: 0.9327Epoch 00377: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2759 - acc: 0.9326 - val_loss: 0.5384 - val_acc: 0.8491 Epoch 379/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2733 - acc: 0.9332Epoch 00378: val_loss improved from 0.53821 to 0.53805, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2747 - acc: 0.9328 - val_loss: 0.5381 - val_acc: 0.8503 Epoch 380/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2679 - acc: 0.9334Epoch 00379: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2680 - acc: 0.9335 - val_loss: 0.5386 - val_acc: 0.8491 Epoch 381/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2739 - acc: 0.9309Epoch 00380: val_loss improved from 0.53805 to 0.53798, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2745 - acc: 0.9305 - val_loss: 0.5380 - val_acc: 0.8491 Epoch 382/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2754 - acc: 0.9313Epoch 00381: val_loss improved from 0.53798 to 0.53702, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2763 - acc: 0.9310 - val_loss: 0.5370 - val_acc: 0.8479 Epoch 383/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2719 - acc: 0.9327Epoch 00382: val_loss improved from 0.53702 to 0.53678, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2698 - acc: 0.9331 - val_loss: 0.5368 - val_acc: 0.8479 Epoch 384/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.2730 - acc: 0.9294Epoch 00383: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2734 - acc: 0.9293 - val_loss: 0.5370 - val_acc: 0.8479 Epoch 385/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2707 - acc: 0.9327Epoch 00384: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2701 - acc: 0.9329 - val_loss: 0.5377 - val_acc: 0.8479 Epoch 386/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2776 - acc: 0.9298Epoch 00385: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2775 - acc: 0.9299 - val_loss: 0.5375 - val_acc: 0.8479 Epoch 387/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2723 - acc: 0.9324Epoch 00386: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2717 - acc: 0.9329 - val_loss: 0.5371 - val_acc: 0.8479 Epoch 388/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2701 - acc: 0.9297Epoch 00387: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2690 - acc: 0.9299 - val_loss: 0.5370 - val_acc: 0.8479 Epoch 389/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2736 - acc: 0.9286Epoch 00388: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2744 - acc: 0.9280 - val_loss: 0.5368 - val_acc: 0.8467 Epoch 390/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2757 - acc: 0.9301Epoch 00389: val_loss improved from 0.53678 to 0.53676, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2756 - acc: 0.9301 - val_loss: 0.5368 - val_acc: 0.8491 Epoch 391/1000 6656/6680 [============================>.] 
Epoch 00390: val_loss improved from 0.53676 to 0.53622, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.2675 - acc: 0.9305 - val_loss: 0.5362 - val_acc: 0.8503
Epoch 392/1000
Epoch 00391: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.2652 - acc: 0.9332 - val_loss: 0.5364 - val_acc: 0.8491
Epoch 393/1000
Epoch 00392: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.2805 - acc: 0.9290 - val_loss: 0.5363 - val_acc: 0.8491

[... epochs 394-539 omitted: val_loss improved intermittently from 0.5363 to 0.5208, with each improvement saving saved_models/weights.best.Resnet50.hdf5; training loss fell from ~0.27 to ~0.21, training acc rose from ~0.93 to ~0.95, and val_acc fluctuated between 0.8455 and 0.8539 ...]

Epoch 540/1000
Epoch 00539: val_loss did not improve
6680/6680 [==============================] - 1s - loss: 0.2134 - acc: 0.9513 - val_loss: 0.5210 - val_acc: 0.8515
Epoch 541/1000
- ETA: 0s - loss: 0.2146 - acc: 0.9500Epoch 00540: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2145 - acc: 0.9499 - val_loss: 0.5213 - val_acc: 0.8503 Epoch 542/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2076 - acc: 0.9503Epoch 00541: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2078 - acc: 0.9501 - val_loss: 0.5209 - val_acc: 0.8503 Epoch 543/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2152 - acc: 0.9489Epoch 00542: val_loss improved from 0.52082 to 0.52060, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2149 - acc: 0.9490 - val_loss: 0.5206 - val_acc: 0.8515 Epoch 544/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2071 - acc: 0.9537Epoch 00543: val_loss improved from 0.52060 to 0.52055, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2089 - acc: 0.9528 - val_loss: 0.5206 - val_acc: 0.8527 Epoch 545/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2194 - acc: 0.9493Epoch 00544: val_loss improved from 0.52055 to 0.52012, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2186 - acc: 0.9491 - val_loss: 0.5201 - val_acc: 0.8527 Epoch 546/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2141 - acc: 0.9479Epoch 00545: val_loss improved from 0.52012 to 0.52005, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2139 - acc: 0.9481 - val_loss: 0.5200 - val_acc: 0.8527 Epoch 547/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.2110 - acc: 0.9446Epoch 00546: val_loss improved from 0.52005 to 0.51948, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2112 - acc: 0.9445 - val_loss: 0.5195 - val_acc: 0.8527 Epoch 548/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2066 - acc: 0.9518Epoch 00547: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2065 - acc: 0.9519 - val_loss: 0.5196 - val_acc: 0.8527 Epoch 549/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2067 - acc: 0.9547Epoch 00548: val_loss improved from 0.51948 to 0.51925, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2059 - acc: 0.9545 - val_loss: 0.5192 - val_acc: 0.8539 Epoch 550/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2155 - acc: 0.9455Epoch 00549: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2152 - acc: 0.9457 - val_loss: 0.5193 - val_acc: 0.8503 Epoch 551/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2051 - acc: 0.9519Epoch 00550: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2050 - acc: 0.9518 - val_loss: 0.5195 - val_acc: 0.8503 Epoch 552/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2082 - acc: 0.9482Epoch 00551: val_loss improved from 0.51925 to 0.51923, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2074 - acc: 0.9487 - val_loss: 0.5192 - val_acc: 0.8539 Epoch 553/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.2095 - acc: 0.9497Epoch 00552: val_loss improved from 0.51923 to 0.51874, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2094 - acc: 0.9499 - val_loss: 0.5187 - val_acc: 0.8503 Epoch 554/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2154 - acc: 0.9466Epoch 00553: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2144 - acc: 0.9469 - val_loss: 0.5189 - val_acc: 0.8539 Epoch 555/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2095 - acc: 0.9492Epoch 00554: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2100 - acc: 0.9490 - val_loss: 0.5191 - val_acc: 0.8503 Epoch 556/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2100 - acc: 0.9485Epoch 00555: val_loss improved from 0.51874 to 0.51857, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2098 - acc: 0.9484 - val_loss: 0.5186 - val_acc: 0.8515 Epoch 557/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2106 - acc: 0.9507Epoch 00556: val_loss improved from 0.51857 to 0.51838, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2105 - acc: 0.9507 - val_loss: 0.5184 - val_acc: 0.8539 Epoch 558/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2063 - acc: 0.9507Epoch 00557: val_loss improved from 0.51838 to 0.51814, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2061 - acc: 0.9509 - val_loss: 0.5181 - val_acc: 0.8515 Epoch 559/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.2082 - acc: 0.9495Epoch 00558: val_loss improved from 0.51814 to 0.51806, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2082 - acc: 0.9496 - val_loss: 0.5181 - val_acc: 0.8503 Epoch 560/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2094 - acc: 0.9479Epoch 00559: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2102 - acc: 0.9475 - val_loss: 0.5182 - val_acc: 0.8539 Epoch 561/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2027 - acc: 0.9554Epoch 00560: val_loss improved from 0.51806 to 0.51777, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2024 - acc: 0.9555 - val_loss: 0.5178 - val_acc: 0.8539 Epoch 562/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2038 - acc: 0.9534Epoch 00561: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2035 - acc: 0.9536 - val_loss: 0.5178 - val_acc: 0.8527 Epoch 563/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1999 - acc: 0.9520Epoch 00562: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2007 - acc: 0.9516 - val_loss: 0.5179 - val_acc: 0.8515 Epoch 564/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2059 - acc: 0.9507Epoch 00563: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2057 - acc: 0.9509 - val_loss: 0.5184 - val_acc: 0.8527 Epoch 565/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2017 - acc: 0.9525Epoch 00564: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2018 - acc: 0.9525 - val_loss: 0.5183 - val_acc: 0.8527 Epoch 566/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.2050 - acc: 0.9486Epoch 00565: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2050 - acc: 0.9487 - val_loss: 0.5181 - val_acc: 0.8515 Epoch 567/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1987 - acc: 0.9521Epoch 00566: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1984 - acc: 0.9521 - val_loss: 0.5185 - val_acc: 0.8539 Epoch 568/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2089 - acc: 0.9512Epoch 00567: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2084 - acc: 0.9513 - val_loss: 0.5183 - val_acc: 0.8527 Epoch 569/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1996 - acc: 0.9554Epoch 00568: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1992 - acc: 0.9555 - val_loss: 0.5184 - val_acc: 0.8527 Epoch 570/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2026 - acc: 0.9507Epoch 00569: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2021 - acc: 0.9510 - val_loss: 0.5181 - val_acc: 0.8527 Epoch 571/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1975 - acc: 0.9551Epoch 00570: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1972 - acc: 0.9552 - val_loss: 0.5180 - val_acc: 0.8515 Epoch 572/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2049 - acc: 0.9478Epoch 00571: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2050 - acc: 0.9478 - val_loss: 0.5181 - val_acc: 0.8491 Epoch 573/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2009 - acc: 0.9525Epoch 00572: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2025 - acc: 0.9525 - val_loss: 0.5184 - val_acc: 0.8491 Epoch 574/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.1990 - acc: 0.9559Epoch 00573: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1995 - acc: 0.9554 - val_loss: 0.5183 - val_acc: 0.8515 Epoch 575/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1979 - acc: 0.9555Epoch 00574: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1974 - acc: 0.9557 - val_loss: 0.5183 - val_acc: 0.8515 Epoch 576/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2058 - acc: 0.9519Epoch 00575: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2049 - acc: 0.9522 - val_loss: 0.5178 - val_acc: 0.8527 Epoch 577/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1975 - acc: 0.9547Epoch 00576: val_loss improved from 0.51777 to 0.51768, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1967 - acc: 0.9549 - val_loss: 0.5177 - val_acc: 0.8515 Epoch 578/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.2007 - acc: 0.9527Epoch 00577: val_loss improved from 0.51768 to 0.51720, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2004 - acc: 0.9528 - val_loss: 0.5172 - val_acc: 0.8515 Epoch 579/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2021 - acc: 0.9537Epoch 00578: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2026 - acc: 0.9537 - val_loss: 0.5177 - val_acc: 0.8503 Epoch 580/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2043 - acc: 0.9525Epoch 00579: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2037 - acc: 0.9527 - val_loss: 0.5179 - val_acc: 0.8527 Epoch 581/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.2013 - acc: 0.9522Epoch 00580: val_loss improved from 0.51720 to 0.51710, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2015 - acc: 0.9522 - val_loss: 0.5171 - val_acc: 0.8515 Epoch 582/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2016 - acc: 0.9486Epoch 00581: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2014 - acc: 0.9488 - val_loss: 0.5173 - val_acc: 0.8515 Epoch 583/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2006 - acc: 0.9540Epoch 00582: val_loss improved from 0.51710 to 0.51700, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1992 - acc: 0.9546 - val_loss: 0.5170 - val_acc: 0.8515 Epoch 584/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2015 - acc: 0.9549Epoch 00583: val_loss improved from 0.51700 to 0.51699, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.2015 - acc: 0.9549 - val_loss: 0.5170 - val_acc: 0.8527 Epoch 585/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1960 - acc: 0.9534Epoch 00584: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1960 - acc: 0.9536 - val_loss: 0.5171 - val_acc: 0.8527 Epoch 586/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1990 - acc: 0.9497Epoch 00585: val_loss improved from 0.51699 to 0.51670, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1990 - acc: 0.9497 - val_loss: 0.5167 - val_acc: 0.8515 Epoch 587/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.1969 - acc: 0.9550Epoch 00586: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1961 - acc: 0.9554 - val_loss: 0.5171 - val_acc: 0.8491 Epoch 588/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.2033 - acc: 0.9533Epoch 00587: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2031 - acc: 0.9533 - val_loss: 0.5175 - val_acc: 0.8503 Epoch 589/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2015 - acc: 0.9545Epoch 00588: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2011 - acc: 0.9548 - val_loss: 0.5173 - val_acc: 0.8527 Epoch 590/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1994 - acc: 0.9539Epoch 00589: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1989 - acc: 0.9540 - val_loss: 0.5177 - val_acc: 0.8515 Epoch 591/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.2039 - acc: 0.9524Epoch 00590: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.2039 - acc: 0.9525 - val_loss: 0.5180 - val_acc: 0.8527 Epoch 592/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1988 - acc: 0.9539Epoch 00591: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1994 - acc: 0.9536 - val_loss: 0.5177 - val_acc: 0.8515 Epoch 593/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1959 - acc: 0.9522Epoch 00592: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1959 - acc: 0.9522 - val_loss: 0.5177 - val_acc: 0.8527 Epoch 594/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1955 - acc: 0.9545Epoch 00593: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1957 - acc: 0.9545 - val_loss: 0.5174 - val_acc: 0.8515 Epoch 595/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1919 - acc: 0.9540Epoch 00594: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1918 - acc: 0.9542 - val_loss: 0.5172 - val_acc: 0.8503 Epoch 596/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1948 - acc: 0.9560Epoch 00595: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1935 - acc: 0.9561 - val_loss: 0.5169 - val_acc: 0.8491 Epoch 597/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1963 - acc: 0.9530Epoch 00596: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1937 - acc: 0.9545 - val_loss: 0.5169 - val_acc: 0.8491 Epoch 598/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1985 - acc: 0.9539Epoch 00597: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1966 - acc: 0.9545 - val_loss: 0.5172 - val_acc: 0.8503 Epoch 599/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1912 - acc: 0.9556Epoch 00598: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1918 - acc: 0.9554 - val_loss: 0.5174 - val_acc: 0.8527 Epoch 600/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1964 - acc: 0.9536Epoch 00599: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1959 - acc: 0.9537 - val_loss: 0.5171 - val_acc: 0.8527 Epoch 601/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1930 - acc: 0.9549Epoch 00600: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1933 - acc: 0.9548 - val_loss: 0.5171 - val_acc: 0.8527 Epoch 602/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1988 - acc: 0.9537Epoch 00601: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1986 - acc: 0.9539 - val_loss: 0.5171 - val_acc: 0.8503 Epoch 603/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1935 - acc: 0.9524Epoch 00602: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1936 - acc: 0.9524 - val_loss: 0.5168 - val_acc: 0.8491 Epoch 604/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1890 - acc: 0.9575Epoch 00603: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1892 - acc: 0.9573 - val_loss: 0.5169 - val_acc: 0.8503 Epoch 605/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1914 - acc: 0.9558Epoch 00604: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1916 - acc: 0.9558 - val_loss: 0.5168 - val_acc: 0.8503 Epoch 606/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1933 - acc: 0.9551Epoch 00605: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1935 - acc: 0.9549 - val_loss: 0.5168 - val_acc: 0.8503 Epoch 607/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1937 - acc: 0.9545Epoch 00606: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1934 - acc: 0.9551 - val_loss: 0.5168 - val_acc: 0.8479 Epoch 608/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1909 - acc: 0.9573Epoch 00607: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1914 - acc: 0.9573 - val_loss: 0.5170 - val_acc: 0.8515 Epoch 609/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1888 - acc: 0.9593Epoch 00608: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1886 - acc: 0.9594 - val_loss: 0.5168 - val_acc: 0.8491 Epoch 610/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.1950 - acc: 0.9542Epoch 00609: val_loss improved from 0.51670 to 0.51613, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1943 - acc: 0.9543 - val_loss: 0.5161 - val_acc: 0.8515 Epoch 611/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1922 - acc: 0.9555Epoch 00610: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1923 - acc: 0.9555 - val_loss: 0.5163 - val_acc: 0.8515 Epoch 612/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1934 - acc: 0.9542Epoch 00611: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1933 - acc: 0.9542 - val_loss: 0.5164 - val_acc: 0.8527 Epoch 613/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1992 - acc: 0.9521Epoch 00612: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1992 - acc: 0.9521 - val_loss: 0.5163 - val_acc: 0.8515 Epoch 614/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1892 - acc: 0.9566Epoch 00613: val_loss improved from 0.51613 to 0.51584, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1885 - acc: 0.9570 - val_loss: 0.5158 - val_acc: 0.8479 Epoch 615/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1966 - acc: 0.9513Epoch 00614: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1970 - acc: 0.9512 - val_loss: 0.5162 - val_acc: 0.8479 Epoch 616/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1918 - acc: 0.9554Epoch 00615: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1913 - acc: 0.9555 - val_loss: 0.5166 - val_acc: 0.8467 Epoch 617/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1883 - acc: 0.9552Epoch 00616: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1881 - acc: 0.9554 - val_loss: 0.5164 - val_acc: 0.8491 Epoch 618/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1973 - acc: 0.9519Epoch 00617: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1976 - acc: 0.9516 - val_loss: 0.5161 - val_acc: 0.8515 Epoch 619/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1936 - acc: 0.9530Epoch 00618: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1933 - acc: 0.9531 - val_loss: 0.5159 - val_acc: 0.8515 Epoch 620/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1886 - acc: 0.9558Epoch 00619: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1885 - acc: 0.9560 - val_loss: 0.5163 - val_acc: 0.8515 Epoch 621/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1838 - acc: 0.9599Epoch 00620: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1831 - acc: 0.9605 - val_loss: 0.5161 - val_acc: 0.8515 Epoch 622/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1893 - acc: 0.9561Epoch 00621: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1892 - acc: 0.9564 - val_loss: 0.5162 - val_acc: 0.8515 Epoch 623/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1909 - acc: 0.9557Epoch 00622: val_loss improved from 0.51584 to 0.51581, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1903 - acc: 0.9558 - val_loss: 0.5158 - val_acc: 0.8515 Epoch 624/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1889 - acc: 0.9558Epoch 00623: val_loss improved from 0.51581 to 0.51563, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1885 - acc: 0.9560 - val_loss: 0.5156 - val_acc: 0.8503 Epoch 625/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1912 - acc: 0.9572Epoch 00624: val_loss improved from 0.51563 to 0.51547, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1913 - acc: 0.9572 - val_loss: 0.5155 - val_acc: 0.8503 Epoch 626/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1969 - acc: 0.9501Epoch 00625: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1967 - acc: 0.9503 - val_loss: 0.5158 - val_acc: 0.8503 Epoch 627/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1897 - acc: 0.9585Epoch 00626: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1895 - acc: 0.9587 - val_loss: 0.5158 - val_acc: 0.8503 Epoch 628/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1808 - acc: 0.9585Epoch 00627: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1809 - acc: 0.9582 - val_loss: 0.5160 - val_acc: 0.8503 Epoch 629/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1883 - acc: 0.9555Epoch 00628: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1874 - acc: 0.9561 - val_loss: 0.5161 - val_acc: 0.8503 Epoch 630/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1874 - acc: 0.9560Epoch 00629: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1878 - acc: 0.9560 - val_loss: 0.5157 - val_acc: 0.8491 Epoch 631/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1900 - acc: 0.9575Epoch 00630: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1899 - acc: 0.9575 - val_loss: 0.5159 - val_acc: 0.8503 Epoch 632/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1900 - acc: 0.9567Epoch 00631: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1897 - acc: 0.9569 - val_loss: 0.5160 - val_acc: 0.8491 Epoch 633/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1818 - acc: 0.9593Epoch 00632: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1806 - acc: 0.9599 - val_loss: 0.5156 - val_acc: 0.8491 Epoch 634/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1889 - acc: 0.9540Epoch 00633: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1890 - acc: 0.9540 - val_loss: 0.5157 - val_acc: 0.8503 Epoch 635/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1829 - acc: 0.9572Epoch 00634: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1830 - acc: 0.9570 - val_loss: 0.5159 - val_acc: 0.8515 Epoch 636/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1879 - acc: 0.9534Epoch 00635: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1883 - acc: 0.9533 - val_loss: 0.5158 - val_acc: 0.8491 Epoch 637/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1827 - acc: 0.9551Epoch 00636: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1835 - acc: 0.9548 - val_loss: 0.5163 - val_acc: 0.8491 Epoch 638/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1863 - acc: 0.9566Epoch 00637: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1860 - acc: 0.9567 - val_loss: 0.5157 - val_acc: 0.8491 Epoch 639/1000 6528/6680 [============================>.] 
Epoch 639/1000 6680/6680 [==============================] - 1s - loss: 0.1831 - acc: 0.9587 - val_loss: 0.5159 - val_acc: 0.8479
...
Epoch 00641: val_loss improved from 0.51547 to 0.51534, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.1872 - acc: 0.9555 - val_loss: 0.5153 - val_acc: 0.8467
[log truncated: over epochs 642-791 val_loss decreases intermittently from 0.5153 to 0.5084, and the weights are re-saved to saved_models/weights.best.Resnet50.hdf5 on each improvement; training loss falls from ~0.19 to ~0.15 (acc ~0.956 to ~0.969) while val_acc hovers near 0.85]
...
Epoch 00769: val_loss improved from 0.50844 to 0.50836, saving model to saved_models/weights.best.Resnet50.hdf5
6680/6680 [==============================] - 1s - loss: 0.1607 - acc: 0.9633 - val_loss: 0.5084 - val_acc: 0.8515
Epoch 792/1000 6400/6680 [===========================>..]
- ETA: 0s - loss: 0.1540 - acc: 0.9672Epoch 00791: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1536 - acc: 0.9675 - val_loss: 0.5097 - val_acc: 0.8515 Epoch 793/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1527 - acc: 0.9664Epoch 00792: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1540 - acc: 0.9654 - val_loss: 0.5096 - val_acc: 0.8479 Epoch 794/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1524 - acc: 0.9668Epoch 00793: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1528 - acc: 0.9666 - val_loss: 0.5097 - val_acc: 0.8467 Epoch 795/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1562 - acc: 0.9684Epoch 00794: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1559 - acc: 0.9686 - val_loss: 0.5097 - val_acc: 0.8467 Epoch 796/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1581 - acc: 0.9649Epoch 00795: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1580 - acc: 0.9648 - val_loss: 0.5104 - val_acc: 0.8479 Epoch 797/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1567 - acc: 0.9668Epoch 00796: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1572 - acc: 0.9666 - val_loss: 0.5108 - val_acc: 0.8467 Epoch 798/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1589 - acc: 0.9642Epoch 00797: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1592 - acc: 0.9641 - val_loss: 0.5107 - val_acc: 0.8491 Epoch 799/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1585 - acc: 0.9637Epoch 00798: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1574 - acc: 0.9642 - val_loss: 0.5107 - val_acc: 0.8491 Epoch 800/1000 6400/6680 [===========================>..] 
- ETA: 0s - loss: 0.1557 - acc: 0.9644Epoch 00799: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1565 - acc: 0.9645 - val_loss: 0.5106 - val_acc: 0.8491 Epoch 801/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1564 - acc: 0.9660Epoch 00800: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1564 - acc: 0.9660 - val_loss: 0.5104 - val_acc: 0.8479 Epoch 802/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1509 - acc: 0.9678Epoch 00801: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1508 - acc: 0.9680 - val_loss: 0.5100 - val_acc: 0.8479 Epoch 803/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1518 - acc: 0.9677Epoch 00802: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1518 - acc: 0.9678 - val_loss: 0.5096 - val_acc: 0.8527 Epoch 804/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1559 - acc: 0.9668Epoch 00803: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1557 - acc: 0.9668 - val_loss: 0.5097 - val_acc: 0.8515 Epoch 805/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1541 - acc: 0.9678Epoch 00804: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1539 - acc: 0.9681 - val_loss: 0.5097 - val_acc: 0.8491 Epoch 806/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1534 - acc: 0.9663Epoch 00805: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1537 - acc: 0.9662 - val_loss: 0.5094 - val_acc: 0.8503 Epoch 807/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1578 - acc: 0.9648Epoch 00806: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1575 - acc: 0.9650 - val_loss: 0.5090 - val_acc: 0.8503 Epoch 808/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.1552 - acc: 0.9668Epoch 00807: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1557 - acc: 0.9665 - val_loss: 0.5088 - val_acc: 0.8503 Epoch 809/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1524 - acc: 0.9674Epoch 00808: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1523 - acc: 0.9675 - val_loss: 0.5086 - val_acc: 0.8503 Epoch 810/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1493 - acc: 0.9691Epoch 00809: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1494 - acc: 0.9690 - val_loss: 0.5088 - val_acc: 0.8515 Epoch 811/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1529 - acc: 0.9657Epoch 00810: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1535 - acc: 0.9653 - val_loss: 0.5091 - val_acc: 0.8503 Epoch 812/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1567 - acc: 0.9639Epoch 00811: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1566 - acc: 0.9641 - val_loss: 0.5093 - val_acc: 0.8503 Epoch 813/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1504 - acc: 0.9665Epoch 00812: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1501 - acc: 0.9666 - val_loss: 0.5091 - val_acc: 0.8503 Epoch 814/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1516 - acc: 0.9655Epoch 00813: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1519 - acc: 0.9657 - val_loss: 0.5093 - val_acc: 0.8527 Epoch 815/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1464 - acc: 0.9684Epoch 00814: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1468 - acc: 0.9684 - val_loss: 0.5094 - val_acc: 0.8527 Epoch 816/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1500 - acc: 0.9680Epoch 00815: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1498 - acc: 0.9681 - val_loss: 0.5095 - val_acc: 0.8515 Epoch 817/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1527 - acc: 0.9675Epoch 00816: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1530 - acc: 0.9672 - val_loss: 0.5094 - val_acc: 0.8527 Epoch 818/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1555 - acc: 0.9657Epoch 00817: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1554 - acc: 0.9657 - val_loss: 0.5092 - val_acc: 0.8527 Epoch 819/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1485 - acc: 0.9684Epoch 00818: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1494 - acc: 0.9681 - val_loss: 0.5091 - val_acc: 0.8515 Epoch 820/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1527 - acc: 0.9667Epoch 00819: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1529 - acc: 0.9668 - val_loss: 0.5089 - val_acc: 0.8551 Epoch 821/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1561 - acc: 0.9650Epoch 00820: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1559 - acc: 0.9651 - val_loss: 0.5085 - val_acc: 0.8539 Epoch 822/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1491 - acc: 0.9669Epoch 00821: val_loss improved from 0.50836 to 0.50807, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1489 - acc: 0.9671 - val_loss: 0.5081 - val_acc: 0.8551 Epoch 823/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1538 - acc: 0.9678Epoch 00822: val_loss improved from 0.50807 to 0.50805, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1535 - acc: 0.9680 - val_loss: 0.5080 - val_acc: 0.8527 Epoch 824/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1591 - acc: 0.9637Epoch 00823: val_loss improved from 0.50805 to 0.50785, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1601 - acc: 0.9630 - val_loss: 0.5078 - val_acc: 0.8515 Epoch 825/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1540 - acc: 0.9662Epoch 00824: val_loss improved from 0.50785 to 0.50767, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1538 - acc: 0.9663 - val_loss: 0.5077 - val_acc: 0.8515 Epoch 826/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1481 - acc: 0.9666Epoch 00825: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1487 - acc: 0.9666 - val_loss: 0.5079 - val_acc: 0.8527 Epoch 827/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1561 - acc: 0.9680Epoch 00826: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1560 - acc: 0.9681 - val_loss: 0.5081 - val_acc: 0.8515 Epoch 828/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1469 - acc: 0.9695Epoch 00827: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1469 - acc: 0.9695 - val_loss: 0.5084 - val_acc: 0.8539 Epoch 829/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1517 - acc: 0.9636Epoch 00828: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1508 - acc: 0.9644 - val_loss: 0.5083 - val_acc: 0.8515 Epoch 830/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1547 - acc: 0.9669Epoch 00829: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1550 - acc: 0.9668 - val_loss: 0.5086 - val_acc: 0.8515 Epoch 831/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1515 - acc: 0.9680Epoch 00830: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1516 - acc: 0.9681 - val_loss: 0.5089 - val_acc: 0.8539 Epoch 832/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1560 - acc: 0.9647Epoch 00831: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1561 - acc: 0.9647 - val_loss: 0.5095 - val_acc: 0.8539 Epoch 833/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1540 - acc: 0.9666Epoch 00832: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1559 - acc: 0.9659 - val_loss: 0.5092 - val_acc: 0.8527 Epoch 834/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1531 - acc: 0.9636Epoch 00833: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1545 - acc: 0.9636 - val_loss: 0.5092 - val_acc: 0.8527 Epoch 835/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1468 - acc: 0.9686Epoch 00834: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1465 - acc: 0.9684 - val_loss: 0.5087 - val_acc: 0.8527 Epoch 836/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1531 - acc: 0.9675Epoch 00835: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1536 - acc: 0.9675 - val_loss: 0.5088 - val_acc: 0.8539 Epoch 837/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1499 - acc: 0.9667Epoch 00836: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1505 - acc: 0.9666 - val_loss: 0.5091 - val_acc: 0.8503 Epoch 838/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1515 - acc: 0.9648Epoch 00837: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1513 - acc: 0.9650 - val_loss: 0.5091 - val_acc: 0.8491 Epoch 839/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1501 - acc: 0.9677Epoch 00838: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1499 - acc: 0.9678 - val_loss: 0.5086 - val_acc: 0.8503 Epoch 840/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1496 - acc: 0.9669Epoch 00839: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1492 - acc: 0.9672 - val_loss: 0.5086 - val_acc: 0.8515 Epoch 841/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1470 - acc: 0.9706Epoch 00840: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1470 - acc: 0.9707 - val_loss: 0.5085 - val_acc: 0.8515 Epoch 842/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1494 - acc: 0.9672Epoch 00841: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1506 - acc: 0.9665 - val_loss: 0.5083 - val_acc: 0.8527 Epoch 843/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1419 - acc: 0.9718Epoch 00842: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1419 - acc: 0.9719 - val_loss: 0.5084 - val_acc: 0.8515 Epoch 844/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1475 - acc: 0.9692Epoch 00843: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1475 - acc: 0.9689 - val_loss: 0.5086 - val_acc: 0.8515 Epoch 845/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1488 - acc: 0.9681Epoch 00844: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1485 - acc: 0.9683 - val_loss: 0.5085 - val_acc: 0.8503 Epoch 846/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1483 - acc: 0.9680Epoch 00845: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1484 - acc: 0.9680 - val_loss: 0.5083 - val_acc: 0.8515 Epoch 847/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1461 - acc: 0.9681Epoch 00846: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1461 - acc: 0.9681 - val_loss: 0.5081 - val_acc: 0.8503 Epoch 848/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1460 - acc: 0.9695Epoch 00847: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1460 - acc: 0.9690 - val_loss: 0.5078 - val_acc: 0.8515 Epoch 849/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1455 - acc: 0.9681Epoch 00848: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1464 - acc: 0.9681 - val_loss: 0.5080 - val_acc: 0.8527 Epoch 850/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1505 - acc: 0.9663Epoch 00849: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1503 - acc: 0.9665 - val_loss: 0.5080 - val_acc: 0.8515 Epoch 851/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1480 - acc: 0.9686Epoch 00850: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1479 - acc: 0.9686 - val_loss: 0.5080 - val_acc: 0.8479 Epoch 852/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1475 - acc: 0.9698Epoch 00851: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1474 - acc: 0.9699 - val_loss: 0.5082 - val_acc: 0.8479 Epoch 853/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1410 - acc: 0.9711Epoch 00852: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1415 - acc: 0.9711 - val_loss: 0.5082 - val_acc: 0.8503 Epoch 854/1000 6400/6680 [===========================>..] 
- ETA: 0s - loss: 0.1508 - acc: 0.9683Epoch 00853: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1502 - acc: 0.9686 - val_loss: 0.5079 - val_acc: 0.8515 Epoch 855/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1466 - acc: 0.9695Epoch 00854: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1475 - acc: 0.9692 - val_loss: 0.5080 - val_acc: 0.8527 Epoch 856/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1470 - acc: 0.9678Epoch 00855: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1469 - acc: 0.9678 - val_loss: 0.5081 - val_acc: 0.8515 Epoch 857/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1451 - acc: 0.9692Epoch 00856: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1449 - acc: 0.9693 - val_loss: 0.5084 - val_acc: 0.8515 Epoch 858/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1476 - acc: 0.9656Epoch 00857: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1483 - acc: 0.9650 - val_loss: 0.5086 - val_acc: 0.8515 Epoch 859/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1463 - acc: 0.9686Epoch 00858: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1458 - acc: 0.9687 - val_loss: 0.5080 - val_acc: 0.8503 Epoch 860/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1493 - acc: 0.9666Epoch 00859: val_loss improved from 0.50767 to 0.50760, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1495 - acc: 0.9666 - val_loss: 0.5076 - val_acc: 0.8491 Epoch 861/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1482 - acc: 0.9653Epoch 00860: val_loss improved from 0.50760 to 0.50736, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1478 - acc: 0.9654 - val_loss: 0.5074 - val_acc: 0.8491 Epoch 862/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1508 - acc: 0.9654Epoch 00861: val_loss improved from 0.50736 to 0.50706, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1505 - acc: 0.9656 - val_loss: 0.5071 - val_acc: 0.8503 Epoch 863/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1443 - acc: 0.9712Epoch 00862: val_loss improved from 0.50706 to 0.50668, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1443 - acc: 0.9711 - val_loss: 0.5067 - val_acc: 0.8491 Epoch 864/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1457 - acc: 0.9698Epoch 00863: val_loss improved from 0.50668 to 0.50656, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1453 - acc: 0.9702 - val_loss: 0.5066 - val_acc: 0.8515 Epoch 865/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1448 - acc: 0.9677Epoch 00864: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1448 - acc: 0.9677 - val_loss: 0.5069 - val_acc: 0.8515 Epoch 866/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1467 - acc: 0.9671Epoch 00865: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1466 - acc: 0.9672 - val_loss: 0.5069 - val_acc: 0.8551 Epoch 867/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.1454 - acc: 0.9671Epoch 00866: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1458 - acc: 0.9666 - val_loss: 0.5070 - val_acc: 0.8527 Epoch 868/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1435 - acc: 0.9680Epoch 00867: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1437 - acc: 0.9678 - val_loss: 0.5069 - val_acc: 0.8539 Epoch 869/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1503 - acc: 0.9670Epoch 00868: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1496 - acc: 0.9671 - val_loss: 0.5071 - val_acc: 0.8527 Epoch 870/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1398 - acc: 0.9723Epoch 00869: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1408 - acc: 0.9716 - val_loss: 0.5073 - val_acc: 0.8527 Epoch 871/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1395 - acc: 0.9719Epoch 00870: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1394 - acc: 0.9720 - val_loss: 0.5072 - val_acc: 0.8551 Epoch 872/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1502 - acc: 0.9653Epoch 00871: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1496 - acc: 0.9657 - val_loss: 0.5068 - val_acc: 0.8527 Epoch 873/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1427 - acc: 0.9691Epoch 00872: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1424 - acc: 0.9690 - val_loss: 0.5068 - val_acc: 0.8551 Epoch 874/1000 6528/6680 [============================>.] 
- ETA: 0s - loss: 0.1465 - acc: 0.9698Epoch 00873: val_loss improved from 0.50656 to 0.50652, saving model to saved_models/weights.best.Resnet50.hdf5 6680/6680 [==============================] - 1s - loss: 0.1460 - acc: 0.9699 - val_loss: 0.5065 - val_acc: 0.8539 Epoch 875/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1487 - acc: 0.9677Epoch 00874: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1495 - acc: 0.9668 - val_loss: 0.5067 - val_acc: 0.8551 Epoch 876/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1428 - acc: 0.9675Epoch 00875: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1425 - acc: 0.9677 - val_loss: 0.5070 - val_acc: 0.8539 Epoch 877/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1393 - acc: 0.9686Epoch 00876: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1396 - acc: 0.9686 - val_loss: 0.5067 - val_acc: 0.8539 Epoch 878/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1436 - acc: 0.9686Epoch 00877: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1446 - acc: 0.9677 - val_loss: 0.5066 - val_acc: 0.8527 Epoch 879/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1454 - acc: 0.9689Epoch 00878: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1444 - acc: 0.9693 - val_loss: 0.5071 - val_acc: 0.8551 Epoch 880/1000 6400/6680 [===========================>..] - ETA: 0s - loss: 0.1449 - acc: 0.9670Epoch 00879: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1445 - acc: 0.9675 - val_loss: 0.5070 - val_acc: 0.8527 Epoch 881/1000 6656/6680 [============================>.] 
- ETA: 0s - loss: 0.1424 - acc: 0.9694Epoch 00880: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1425 - acc: 0.9692 - val_loss: 0.5069 - val_acc: 0.8527 Epoch 882/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1389 - acc: 0.9731Epoch 00881: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1392 - acc: 0.9729 - val_loss: 0.5070 - val_acc: 0.8515 Epoch 883/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1451 - acc: 0.9691Epoch 00882: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1450 - acc: 0.9689 - val_loss: 0.5068 - val_acc: 0.8527 Epoch 884/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1448 - acc: 0.9710Epoch 00883: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1440 - acc: 0.9717 - val_loss: 0.5067 - val_acc: 0.8539 Epoch 885/1000 6656/6680 [============================>.] - ETA: 0s - loss: 0.1436 - acc: 0.9703Epoch 00884: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1438 - acc: 0.9702 - val_loss: 0.5068 - val_acc: 0.8539 Epoch 886/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1413 - acc: 0.9686Epoch 00885: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1416 - acc: 0.9683 - val_loss: 0.5068 - val_acc: 0.8527 Epoch 887/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1368 - acc: 0.9714Epoch 00886: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1383 - acc: 0.9704 - val_loss: 0.5068 - val_acc: 0.8539 Epoch 888/1000 6528/6680 [============================>.] - ETA: 0s - loss: 0.1373 - acc: 0.9720Epoch 00887: val_loss did not improve 6680/6680 [==============================] - 1s - loss: 0.1372 - acc: 0.9720 - val_loss: 0.5068 - val_acc: 0.8527 Epoch 889/1000 6656/6680 [============================>.] 
[Training log truncated: Epochs 888-1000 of 1000, ~1s each. Training loss fell from ~0.141 to ~0.127 (acc ~0.97) while validation loss plateaued near 0.505 (val_acc ~0.85). val_loss improved intermittently, each time saving the model to saved_models/weights.best.Resnet50.hdf5, and reached its best value at Epoch 00994: val_loss improved from 0.50404 to 0.50389. Final epoch 1000/1000: 6680/6680 [==============================] - 1s - loss: 0.1352 - acc: 0.9689 - val_loss: 0.5047 - val_acc: 0.8587]
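The "val_loss improved ... saving model" / "val_loss did not improve" messages in the log come from a best-only checkpoint: it tracks the lowest validation loss seen so far and saves weights only when that value drops. A minimal sketch of that rule in plain Python (checkpoint_log and its input losses are illustrative, not part of the notebook):

```python
def checkpoint_log(val_losses):
    """Emit the messages a best-only checkpoint would print for a run."""
    best = float("inf")
    messages = []
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            # a new best validation loss: this is when the weights get saved
            messages.append("Epoch %05d: val_loss improved from %.5f to %.5f"
                            % (epoch, best, loss))
            best = loss
        else:
            messages.append("Epoch %05d: val_loss did not improve" % epoch)
    return messages

for line in checkpoint_log([0.60, 0.55, 0.57, 0.54]):
    print(line)
```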
### TODO: Load the model weights with the best validation loss.
choosing_model.load_weights('saved_models/weights.best.Resnet50.hdf5')
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.
### TODO: Calculate classification accuracy on the test dataset.
model_prediction = [np.argmax(choosing_model.predict(np.expand_dims(feature, axis=0))) for feature in test_model]
test_accuracy = 100*np.sum(np.array(model_prediction)==np.argmax(test_targets, axis=1))/len(model_prediction)
print('Test accuracy: %.4f%%' % test_accuracy)
Test accuracy: 84.3301%
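The accuracy formula in the cell above can be sanity-checked on toy data; toy_preds and toy_targets below are made-up stand-ins for model_prediction and test_targets:

```python
import numpy as np

# toy stand-ins: 4 samples, 3 classes; 3 of the 4 predictions
# match the argmax of the one-hot targets
toy_preds = [0, 2, 1, 1]
toy_targets = np.array([[1, 0, 0],
                        [0, 0, 1],
                        [0, 1, 0],
                        [1, 0, 0]])

# same formula as the test-accuracy cell above
toy_accuracy = 100 * np.sum(np.array(toy_preds) == np.argmax(toy_targets, axis=1)) / len(toy_preds)
print(toy_accuracy)  # 75.0
```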
Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan_hound, etc.) that is predicted by your model.
Similar to the analogous function in Step 5, your function should have three steps:
1. Extract the bottleneck features corresponding to the chosen CNN model.
2. Supply the bottleneck features as input to the model to return the predicted vector.
3. Use the dog_names array defined in Step 0 of this notebook to return the corresponding breed.
The functions to extract the bottleneck features can be found in extract_bottleneck_features.py, and they have been imported in an earlier code cell. To obtain the bottleneck features corresponding to your chosen CNN architecture, you need to use the function
extract_{network}
where {network}, in the above filename, should be one of VGG19, Resnet50, InceptionV3, or Xception.
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
from extract_bottleneck_features import *

def predict_dog_breed(img_path):
    # extract the ResNet-50 bottleneck features for the image
    bottleneck_feature = extract_Resnet50(path_to_tensor(img_path))
    # obtain the predicted probability vector over the dog breeds
    predicted_vector = choosing_model.predict(bottleneck_feature)
    # return the dog breed that is predicted by the model
    return dog_names[np.argmax(predicted_vector)]
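The return statement is just an argmax lookup into dog_names. A toy illustration (the breed list and prediction vector here are hypothetical):

```python
import numpy as np

# hypothetical breed list and softmax output, for illustration only
toy_dog_names = ["Affenpinscher", "Afghan_hound", "Basenji"]
toy_predicted_vector = np.array([[0.1, 0.7, 0.2]])

# same lookup as the return statement of predict_dog_breed
breed = toy_dog_names[np.argmax(toy_predicted_vector)]
print(breed)  # Afghan_hound
```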
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a dog is detected in the image, return the predicted breed.
- if a human is detected in the image, return the resembling dog breed.
- if neither is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and dog_detector functions developed above. You are required to use your CNN from Step 5 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!

### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
human_files_short = human_files[:12]
dog_files_short = train_files[:12]
def identify(img_path):
    # load the image with OpenCV and convert BGR (OpenCV) to RGB (matplotlib)
    img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
    # display the image
    plt.imshow(img)
    plt.show()
    if face_detector(img_path):
        print("You look like a ..\n")
        print(predict_dog_breed(img_path))
    elif dog_detector(img_path):
        print("Dog breed: ")
        print(predict_dog_breed(img_path))
    else:
        print("Image is not recognized by the system")
for i in range(len(human_files_short)):
    identify(human_files_short[i])
for i in range(len(dog_files_short)):
    identify(dog_files_short[i])
You look like a .. American_foxhound
You look like a .. American_water_spaniel
You look like a .. American_water_spaniel
You look like a .. Basenji
You look like a .. American_water_spaniel
You look like a .. American_water_spaniel
You look like a .. Lowchen
You look like a .. American_water_spaniel
You look like a .. American_water_spaniel
You look like a .. American_water_spaniel
You look like a .. Chinese_shar-pei
You look like a .. English_toy_spaniel
Dog breed: Kuvasz
Dog breed: Dalmatian
Dog breed: Irish_water_spaniel
Dog breed: American_staffordshire_terrier
Dog breed: American_staffordshire_terrier
Dog breed: English_springer_spaniel
Dog breed: Collie
Dog breed: Petit_basset_griffon_vendeen
Dog breed: American_water_spaniel
Dog breed: Greyhound
Dog breed: Australian_shepherd
Dog breed: German_shepherd_dog
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
Test your algorithm on at least six images from your computer. Feel free to use any images you like. Use at least two human and two dog images.
Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
Answer: The output is about the same as in the tests above. Possible points of improvement: (1) augment the training data (random flips, rotations, and crops) so the network sees more variation per breed; (2) use a more robust face detector than the Haar cascade, which can miss tilted or partially occluded faces; (3) report the top few breeds with their confidence scores instead of a single argmax, which is more informative for mixed-breed dogs and ambiguous images.
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.

# load filenames of the supplied human and dog test images
human_images = np.array(glob("human_profiles/*"))
animal_images = np.array(glob("dog_profiles/*"))

# run the prediction algorithm on every image
for i in range(len(human_images)):
    identify(human_images[i])
for i in range(len(animal_images)):
    identify(animal_images[i])
You look like a .. Canaan_dog
You look like a .. Canaan_dog
Dog breed: Anatolian_shepherd_dog
Image is not recognized by the system